00:00:00.000 Started by upstream project "autotest-per-patch" build number 127103
00:00:00.000 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.048 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/iscsi-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.050 The recommended git tool is: git
00:00:00.050 using credential 00000000-0000-0000-0000-000000000002
00:00:00.052 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/iscsi-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.105 Fetching changes from the remote Git repository
00:00:00.107 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.167 Using shallow fetch with depth 1
00:00:00.167 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.167 > git --version # timeout=10
00:00:00.215 > git --version # 'git version 2.39.2'
00:00:00.215 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.245 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.245 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.589 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.600 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.612 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD)
00:00:04.612 > git config core.sparsecheckout # timeout=10
00:00:04.623 > git read-tree -mu HEAD # timeout=10
00:00:04.638 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5
00:00:04.692 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters"
00:00:04.692 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10
00:00:04.800 [Pipeline] Start of Pipeline
00:00:04.812 [Pipeline] library
00:00:04.813 Loading library shm_lib@master
00:00:04.813 Library shm_lib@master is cached. Copying from home.
00:00:04.824 [Pipeline] node
00:00:04.833 Running on VM-host-SM4 in /var/jenkins/workspace/iscsi-uring-vg-autotest_2
00:00:04.835 [Pipeline] {
00:00:04.845 [Pipeline] catchError
00:00:04.846 [Pipeline] {
00:00:04.870 [Pipeline] wrap
00:00:04.891 [Pipeline] {
00:00:04.907 [Pipeline] stage
00:00:04.910 [Pipeline] { (Prologue)
00:00:04.923 [Pipeline] echo
00:00:04.924 Node: VM-host-SM4
00:00:04.928 [Pipeline] cleanWs
00:00:04.934 [WS-CLEANUP] Deleting project workspace...
00:00:04.934 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.939 [WS-CLEANUP] done
00:00:05.107 [Pipeline] setCustomBuildProperty
00:00:05.169 [Pipeline] httpRequest
00:00:05.190 [Pipeline] echo
00:00:05.191 Sorcerer 10.211.164.101 is alive
00:00:05.197 [Pipeline] httpRequest
00:00:05.201 HttpMethod: GET
00:00:05.201 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:05.202 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:05.203 Response Code: HTTP/1.1 200 OK
00:00:05.204 Success: Status code 200 is in the accepted range: 200,404
00:00:05.204 Saving response body to /var/jenkins/workspace/iscsi-uring-vg-autotest_2/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:05.762 [Pipeline] sh
00:00:06.042 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:06.057 [Pipeline] httpRequest
00:00:06.072 [Pipeline] echo
00:00:06.074 Sorcerer 10.211.164.101 is alive
00:00:06.081 [Pipeline] httpRequest
00:00:06.085 HttpMethod: GET
00:00:06.086 URL: http://10.211.164.101/packages/spdk_0c322284fc8cbedc534a5a5ba162764d1e9319da.tar.gz
00:00:06.086 Sending request to url: http://10.211.164.101/packages/spdk_0c322284fc8cbedc534a5a5ba162764d1e9319da.tar.gz
00:00:06.093 Response Code: HTTP/1.1 200 OK
00:00:06.093 Success: Status code 200 is in the accepted range: 200,404
00:00:06.093 Saving response body to /var/jenkins/workspace/iscsi-uring-vg-autotest_2/spdk_0c322284fc8cbedc534a5a5ba162764d1e9319da.tar.gz
00:00:25.973 [Pipeline] sh
00:00:26.256 + tar --no-same-owner -xf spdk_0c322284fc8cbedc534a5a5ba162764d1e9319da.tar.gz
00:00:28.828 [Pipeline] sh
00:00:29.107 + git -C spdk log --oneline -n5
00:00:29.107 0c322284f scripts/nvmf_perf: move SPDK target specific parameters
00:00:29.107 33352e0b6 scripts/perf_nvmf: move sys_config to Server class
00:00:29.107 0ce8280fe scripts/nvmf_perf: remove bdev information from output
00:00:29.107 e0435b1e7 scripts/nvmf_perf: add server factory function
00:00:29.107 920322689 scripts/nvmf_perf: set initiator num_cores earlier
00:00:29.125 [Pipeline] writeFile
00:00:29.141 [Pipeline] sh
00:00:29.424 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:29.434 [Pipeline] sh
00:00:29.732 + cat autorun-spdk.conf
00:00:29.732 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:29.732 SPDK_TEST_ISCSI=1
00:00:29.732 SPDK_TEST_URING=1
00:00:29.732 SPDK_RUN_UBSAN=1
00:00:29.732 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:29.777 RUN_NIGHTLY=0
00:00:29.778 [Pipeline] }
00:00:29.795 [Pipeline] // stage
00:00:29.810 [Pipeline] stage
00:00:29.812 [Pipeline] { (Run VM)
00:00:29.827 [Pipeline] sh
00:00:30.108 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:30.108 + echo 'Start stage prepare_nvme.sh'
00:00:30.108 Start stage prepare_nvme.sh
00:00:30.108 + [[ -n 4 ]]
00:00:30.108 + disk_prefix=ex4
00:00:30.108 + [[ -n /var/jenkins/workspace/iscsi-uring-vg-autotest_2 ]]
00:00:30.108 + [[ -e /var/jenkins/workspace/iscsi-uring-vg-autotest_2/autorun-spdk.conf ]]
00:00:30.108 + source /var/jenkins/workspace/iscsi-uring-vg-autotest_2/autorun-spdk.conf
00:00:30.108 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:30.108 ++ SPDK_TEST_ISCSI=1
00:00:30.108 ++ SPDK_TEST_URING=1
00:00:30.108 ++ SPDK_RUN_UBSAN=1
00:00:30.108 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:30.108 ++ RUN_NIGHTLY=0
00:00:30.108 + cd /var/jenkins/workspace/iscsi-uring-vg-autotest_2
00:00:30.108 + nvme_files=()
00:00:30.108 + declare -A nvme_files
00:00:30.108 + backend_dir=/var/lib/libvirt/images/backends
00:00:30.108 + nvme_files['nvme.img']=5G
00:00:30.108 + nvme_files['nvme-cmb.img']=5G
00:00:30.108 + nvme_files['nvme-multi0.img']=4G
00:00:30.108 + nvme_files['nvme-multi1.img']=4G
00:00:30.108 + nvme_files['nvme-multi2.img']=4G
00:00:30.108 + nvme_files['nvme-openstack.img']=8G
00:00:30.108 + nvme_files['nvme-zns.img']=5G
00:00:30.108 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:30.108 + (( SPDK_TEST_FTL == 1 ))
00:00:30.108 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:30.108 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:30.108 + for nvme in "${!nvme_files[@]}"
00:00:30.108 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G
00:00:30.108 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:30.108 + for nvme in "${!nvme_files[@]}"
00:00:30.108 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G
00:00:30.108 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:30.108 + for nvme in "${!nvme_files[@]}"
00:00:30.108 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G
00:00:30.368 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:30.368 + for nvme in "${!nvme_files[@]}"
00:00:30.368 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G
00:00:30.368 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:30.368 + for nvme in "${!nvme_files[@]}"
00:00:30.368 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G
00:00:30.627 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:30.627 + for nvme in "${!nvme_files[@]}"
00:00:30.627 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G
00:00:30.627 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:30.627 + for nvme in "${!nvme_files[@]}"
00:00:30.627 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G
00:00:30.887 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:30.887 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu
00:00:30.887 + echo 'End stage prepare_nvme.sh'
00:00:30.887 End stage prepare_nvme.sh
00:00:30.898 [Pipeline] sh
00:00:31.181 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:31.181 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora38
00:00:31.181
00:00:31.181 DIR=/var/jenkins/workspace/iscsi-uring-vg-autotest_2/spdk/scripts/vagrant
00:00:31.181 SPDK_DIR=/var/jenkins/workspace/iscsi-uring-vg-autotest_2/spdk
00:00:31.181 VAGRANT_TARGET=/var/jenkins/workspace/iscsi-uring-vg-autotest_2
00:00:31.181 HELP=0
00:00:31.181 DRY_RUN=0
00:00:31.181 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,
00:00:31.181 NVME_DISKS_TYPE=nvme,nvme,
00:00:31.181 NVME_AUTO_CREATE=0
00:00:31.181 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,
00:00:31.181 NVME_CMB=,,
00:00:31.181 NVME_PMR=,,
00:00:31.181 NVME_ZNS=,,
00:00:31.181 NVME_MS=,,
00:00:31.181 NVME_FDP=,,
00:00:31.181 SPDK_VAGRANT_DISTRO=fedora38
00:00:31.181 SPDK_VAGRANT_VMCPU=10
00:00:31.181 SPDK_VAGRANT_VMRAM=12288
00:00:31.181 SPDK_VAGRANT_PROVIDER=libvirt
00:00:31.181 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:31.181 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:31.181 SPDK_OPENSTACK_NETWORK=0
00:00:31.181 VAGRANT_PACKAGE_BOX=0
00:00:31.181 VAGRANTFILE=/var/jenkins/workspace/iscsi-uring-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:00:31.181 FORCE_DISTRO=true
00:00:31.181 VAGRANT_BOX_VERSION=
00:00:31.181 EXTRA_VAGRANTFILES=
00:00:31.181 NIC_MODEL=e1000
00:00:31.181
00:00:31.181 mkdir: created directory '/var/jenkins/workspace/iscsi-uring-vg-autotest_2/fedora38-libvirt'
00:00:31.181 /var/jenkins/workspace/iscsi-uring-vg-autotest_2/fedora38-libvirt /var/jenkins/workspace/iscsi-uring-vg-autotest_2
00:00:35.382 Bringing machine 'default' up with 'libvirt' provider...
00:00:35.641 ==> default: Creating image (snapshot of base box volume).
00:00:35.641 ==> default: Creating domain with the following settings...
00:00:35.641 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721849943_16ecbea45ea31907b585
00:00:35.641 ==> default: -- Domain type: kvm
00:00:35.641 ==> default: -- Cpus: 10
00:00:35.641 ==> default: -- Feature: acpi
00:00:35.641 ==> default: -- Feature: apic
00:00:35.641 ==> default: -- Feature: pae
00:00:35.641 ==> default: -- Memory: 12288M
00:00:35.641 ==> default: -- Memory Backing: hugepages:
00:00:35.641 ==> default: -- Management MAC:
00:00:35.641 ==> default: -- Loader:
00:00:35.641 ==> default: -- Nvram:
00:00:35.641 ==> default: -- Base box: spdk/fedora38
00:00:35.641 ==> default: -- Storage pool: default
00:00:35.641 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721849943_16ecbea45ea31907b585.img (20G)
00:00:35.641 ==> default: -- Volume Cache: default
00:00:35.641 ==> default: -- Kernel:
00:00:35.641 ==> default: -- Initrd:
00:00:35.641 ==> default: -- Graphics Type: vnc
00:00:35.641 ==> default: -- Graphics Port: -1
00:00:35.641 ==> default: -- Graphics IP: 127.0.0.1
00:00:35.641 ==> default: -- Graphics Password: Not defined
00:00:35.641 ==> default: -- Video Type: cirrus
00:00:35.641 ==> default: -- Video VRAM: 9216
00:00:35.641 ==> default: -- Sound Type:
00:00:35.641 ==> default: -- Keymap: en-us
00:00:35.641 ==> default: -- TPM Path:
00:00:35.641 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:35.641 ==> default: -- Command line args:
00:00:35.641 ==> default: -> value=-device,
00:00:35.641 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:35.641 ==> default: -> value=-drive,
00:00:35.641 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0,
00:00:35.641 ==> default: -> value=-device,
00:00:35.641 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:35.641 ==> default: -> value=-device,
00:00:35.641 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:35.641 ==> default: -> value=-drive,
00:00:35.641 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:00:35.641 ==> default: -> value=-device,
00:00:35.641 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:35.641 ==> default: -> value=-drive,
00:00:35.641 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:00:35.641 ==> default: -> value=-device,
00:00:35.641 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:35.641 ==> default: -> value=-drive,
00:00:35.641 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:00:35.641 ==> default: -> value=-device,
00:00:35.641 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:35.899 ==> default: Creating shared folders metadata...
00:00:35.899 ==> default: Starting domain.
00:00:37.804 ==> default: Waiting for domain to get an IP address...
00:00:55.881 ==> default: Waiting for SSH to become available...
00:00:55.881 ==> default: Configuring and enabling network interfaces...
00:01:00.069 default: SSH address: 192.168.121.127:22
00:01:00.069 default: SSH username: vagrant
00:01:00.069 default: SSH auth method: private key
00:01:02.596 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/iscsi-uring-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:10.714 ==> default: Mounting SSHFS shared folder...
00:01:12.616 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/iscsi-uring-vg-autotest_2/fedora38-libvirt/output => /home/vagrant/spdk_repo/output
00:01:12.616 ==> default: Checking Mount..
00:01:13.989 ==> default: Folder Successfully Mounted!
00:01:13.989 ==> default: Running provisioner: file...
00:01:14.558 default: ~/.gitconfig => .gitconfig
00:01:15.126
00:01:15.126 SUCCESS!
00:01:15.126
00:01:15.126 cd to /var/jenkins/workspace/iscsi-uring-vg-autotest_2/fedora38-libvirt and type "vagrant ssh" to use.
00:01:15.126 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:15.126 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/iscsi-uring-vg-autotest_2/fedora38-libvirt" to destroy all trace of vm.
00:01:15.126
00:01:15.135 [Pipeline] }
00:01:15.153 [Pipeline] // stage
00:01:15.162 [Pipeline] dir
00:01:15.162 Running in /var/jenkins/workspace/iscsi-uring-vg-autotest_2/fedora38-libvirt
00:01:15.164 [Pipeline] {
00:01:15.177 [Pipeline] catchError
00:01:15.179 [Pipeline] {
00:01:15.193 [Pipeline] sh
00:01:15.474 + vagrant ssh-config --host vagrant
00:01:15.474 + sed -ne /^Host/,$p
00:01:15.474 + tee ssh_conf
00:01:19.665 Host vagrant
00:01:19.665 HostName 192.168.121.127
00:01:19.665 User vagrant
00:01:19.665 Port 22
00:01:19.665 UserKnownHostsFile /dev/null
00:01:19.665 StrictHostKeyChecking no
00:01:19.665 PasswordAuthentication no
00:01:19.665 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38
00:01:19.665 IdentitiesOnly yes
00:01:19.665 LogLevel FATAL
00:01:19.665 ForwardAgent yes
00:01:19.665 ForwardX11 yes
00:01:19.665
00:01:19.679 [Pipeline] withEnv
00:01:19.681 [Pipeline] {
00:01:19.696 [Pipeline] sh
00:01:19.977 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:19.977 source /etc/os-release
00:01:19.977 [[ -e /image.version ]] && img=$(< /image.version)
00:01:19.977 # Minimal, systemd-like check.
00:01:19.977 if [[ -e /.dockerenv ]]; then
00:01:19.977 # Clear garbage from the node's name:
00:01:19.977 # agt-er_autotest_547-896 -> autotest_547-896
00:01:19.977 # $HOSTNAME is the actual container id
00:01:19.977 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:19.977 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:19.977 # We can assume this is a mount from a host where container is running,
00:01:19.977 # so fetch its hostname to easily identify the target swarm worker.
00:01:19.977 container="$(< /etc/hostname) ($agent)"
00:01:19.977 else
00:01:19.977 # Fallback
00:01:19.977 container=$agent
00:01:19.977 fi
00:01:19.977 fi
00:01:19.977 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:19.977
00:01:20.247 [Pipeline] }
00:01:20.268 [Pipeline] // withEnv
00:01:20.277 [Pipeline] setCustomBuildProperty
00:01:20.292 [Pipeline] stage
00:01:20.294 [Pipeline] { (Tests)
00:01:20.313 [Pipeline] sh
00:01:20.596 + scp -F ssh_conf -r /var/jenkins/workspace/iscsi-uring-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:20.866 [Pipeline] sh
00:01:21.144 + scp -F ssh_conf -r /var/jenkins/workspace/iscsi-uring-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:21.417 [Pipeline] timeout
00:01:21.418 Timeout set to expire in 45 min
00:01:21.420 [Pipeline] {
00:01:21.435 [Pipeline] sh
00:01:21.715 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:22.283 HEAD is now at 0c322284f scripts/nvmf_perf: move SPDK target specific parameters
00:01:22.296 [Pipeline] sh
00:01:22.577 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:22.851 [Pipeline] sh
00:01:23.132 + scp -F ssh_conf -r /var/jenkins/workspace/iscsi-uring-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:23.407 [Pipeline] sh
00:01:23.688 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=iscsi-uring-vg-autotest ./autoruner.sh spdk_repo
00:01:23.948 ++ readlink -f spdk_repo
00:01:23.948 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:23.948 + [[ -n /home/vagrant/spdk_repo ]]
00:01:23.948 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:23.948 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:23.948 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:23.948 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:23.948 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:23.948 + [[ iscsi-uring-vg-autotest == pkgdep-* ]]
00:01:23.948 + cd /home/vagrant/spdk_repo
00:01:23.948 + source /etc/os-release
00:01:23.948 ++ NAME='Fedora Linux'
00:01:23.948 ++ VERSION='38 (Cloud Edition)'
00:01:23.948 ++ ID=fedora
00:01:23.948 ++ VERSION_ID=38
00:01:23.948 ++ VERSION_CODENAME=
00:01:23.948 ++ PLATFORM_ID=platform:f38
00:01:23.948 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:23.948 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:23.948 ++ LOGO=fedora-logo-icon
00:01:23.948 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:23.948 ++ HOME_URL=https://fedoraproject.org/
00:01:23.948 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:23.948 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:23.948 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:23.948 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:23.948 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:23.948 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:23.948 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:23.948 ++ SUPPORT_END=2024-05-14
00:01:23.948 ++ VARIANT='Cloud Edition'
00:01:23.948 ++ VARIANT_ID=cloud
00:01:23.948 + uname -a
00:01:23.948 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:23.948 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:24.520 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:24.520 Hugepages
00:01:24.520 node hugesize free / total
00:01:24.520 node0 1048576kB 0 / 0
00:01:24.520 node0 2048kB 0 / 0
00:01:24.520
00:01:24.520 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:24.520 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:24.520 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:01:24.520 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3
00:01:24.520 + rm -f /tmp/spdk-ld-path
00:01:24.520 + source autorun-spdk.conf
00:01:24.520 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:24.520 ++ SPDK_TEST_ISCSI=1
00:01:24.520 ++ SPDK_TEST_URING=1
00:01:24.520 ++ SPDK_RUN_UBSAN=1
00:01:24.520 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:24.520 ++ RUN_NIGHTLY=0
00:01:24.520 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:24.520 + [[ -n '' ]]
00:01:24.520 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:24.520 + for M in /var/spdk/build-*-manifest.txt
00:01:24.520 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:24.520 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:24.520 + for M in /var/spdk/build-*-manifest.txt
00:01:24.520 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:24.520 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:24.520 ++ uname
00:01:24.520 + [[ Linux == \L\i\n\u\x ]]
00:01:24.520 + sudo dmesg -T
00:01:24.520 + sudo dmesg --clear
00:01:24.520 + dmesg_pid=5173
00:01:24.520 + sudo dmesg -Tw
00:01:24.520 + [[ Fedora Linux == FreeBSD ]]
00:01:24.520 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:24.520 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:24.520 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:24.520 + [[ -x /usr/src/fio-static/fio ]]
00:01:24.520 + export FIO_BIN=/usr/src/fio-static/fio
00:01:24.520 + FIO_BIN=/usr/src/fio-static/fio
00:01:24.520 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:24.520 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:24.520 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:24.521 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:24.521 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:24.521 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:24.521 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:24.521 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:24.521 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:24.521 Test configuration:
00:01:24.780 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:24.780 SPDK_TEST_ISCSI=1
00:01:24.780 SPDK_TEST_URING=1
00:01:24.780 SPDK_RUN_UBSAN=1
00:01:24.780 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:24.780 RUN_NIGHTLY=0
19:39:53 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:24.780 19:39:53 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:24.780 19:39:53 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:24.780 19:39:53 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:24.780 19:39:53 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:24.780 19:39:53 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:24.780 19:39:53 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:24.780 19:39:53 -- paths/export.sh@5 -- $ export PATH
00:01:24.780 19:39:53 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:24.780 19:39:53 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:24.780 19:39:53 -- common/autobuild_common.sh@447 -- $ date +%s
00:01:24.780 19:39:53 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721849993.XXXXXX
00:01:24.780 19:39:53 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721849993.kdQ91x
00:01:24.780 19:39:53 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:01:24.780 19:39:53 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:01:24.780 19:39:53 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:24.780 19:39:53 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:24.780 19:39:53 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:24.780 19:39:53 -- common/autobuild_common.sh@463 -- $ get_config_params
00:01:24.780 19:39:53 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:01:24.780 19:39:53 -- common/autotest_common.sh@10 -- $ set +x
00:01:24.780 19:39:53 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring'
00:01:24.780 19:39:53 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:01:24.780 19:39:53 -- pm/common@17 -- $ local monitor
00:01:24.780 19:39:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:24.780 19:39:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:24.780 19:39:53 -- pm/common@25 -- $ sleep 1
00:01:24.780 19:39:53 -- pm/common@21 -- $ date +%s
00:01:24.780 19:39:53 -- pm/common@21 -- $ date +%s
00:01:24.780 19:39:53 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721849993
00:01:24.780 19:39:53 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721849993
00:01:24.780 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721849993_collect-vmstat.pm.log
00:01:24.780 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721849993_collect-cpu-load.pm.log
00:01:25.714 19:39:54 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:01:25.714 19:39:54 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:25.714 19:39:54 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:25.714 19:39:54 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:25.714 19:39:54 -- spdk/autobuild.sh@16 -- $ date -u
00:01:25.714 Wed Jul 24 07:39:54 PM UTC 2024
00:01:25.714 19:39:54 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:25.714 v24.09-pre-317-g0c322284f
00:01:25.714 19:39:54 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:25.714 19:39:54 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:25.714 19:39:54 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:25.714 19:39:54 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:25.714 19:39:54 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:25.714 19:39:54 -- common/autotest_common.sh@10 -- $ set +x
00:01:25.714 ************************************
00:01:25.714 START TEST ubsan
00:01:25.714 ************************************
00:01:25.714 using ubsan
00:01:25.714 19:39:54 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:01:25.714
00:01:25.714 real 0m0.000s
00:01:25.714 user 0m0.000s
00:01:25.714 sys 0m0.000s
00:01:25.714 19:39:54 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:25.714 19:39:54 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:25.714 ************************************
00:01:25.714 END TEST ubsan
00:01:25.714 ************************************
00:01:25.973 19:39:54 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:25.973 19:39:54 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:25.973 19:39:54 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:25.973 19:39:54 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:25.973 19:39:54 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:25.973 19:39:54 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:25.973 19:39:54 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:25.973 19:39:54 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:25.973 19:39:54 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared
00:01:25.973 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:25.973 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:26.540 Using 'verbs' RDMA provider
00:01:42.795 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:01:57.677 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:01:57.677 Creating mk/config.mk...done.
00:01:57.677 Creating mk/cc.flags.mk...done.
00:01:57.677 Type 'make' to build.
00:01:57.678 19:40:25 -- spdk/autobuild.sh@69 -- $ run_test make make -j10
00:01:57.678 19:40:25 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:57.678 19:40:25 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:57.678 19:40:25 -- common/autotest_common.sh@10 -- $ set +x
00:01:57.678 ************************************
00:01:57.678 START TEST make
00:01:57.678 ************************************
00:01:57.678 19:40:25 make -- common/autotest_common.sh@1125 -- $ make -j10
00:01:57.678 make[1]: Nothing to be done for 'all'.
00:02:07.698 The Meson build system
00:02:07.698 Version: 1.3.1
00:02:07.698 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:07.698 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:07.698 Build type: native build
00:02:07.698 Program cat found: YES (/usr/bin/cat)
00:02:07.698 Project name: DPDK
00:02:07.698 Project version: 24.03.0
00:02:07.698 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:02:07.698 C linker for the host machine: cc ld.bfd 2.39-16
00:02:07.698 Host machine cpu family: x86_64
00:02:07.698 Host machine cpu: x86_64
00:02:07.698 Message: ## Building in Developer Mode ##
00:02:07.698 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:07.698 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:07.698 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:07.698 Program python3 found: YES (/usr/bin/python3)
00:02:07.698 Program cat found: YES (/usr/bin/cat)
00:02:07.698 Compiler for C supports arguments -march=native: YES
00:02:07.698 Checking for size of "void *" : 8
00:02:07.698 Checking for size of "void *" : 8 (cached)
00:02:07.698 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:02:07.698 Library m found: YES
00:02:07.698 Library numa found: YES
00:02:07.698 Has header "numaif.h" : YES
00:02:07.698 Library fdt found: NO
00:02:07.698 Library execinfo found: NO
00:02:07.698 Has header "execinfo.h" : YES
00:02:07.698 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:07.698 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:07.698 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:07.698 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:07.698 Run-time dependency openssl found: YES 3.0.9
00:02:07.698 Run-time dependency libpcap found: YES 1.10.4
00:02:07.698 Has header "pcap.h" with dependency libpcap: YES
00:02:07.698 Compiler for C supports arguments -Wcast-qual: YES
00:02:07.698 Compiler for C supports arguments -Wdeprecated: YES
00:02:07.698 Compiler for C supports arguments -Wformat: YES
00:02:07.698 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:07.698 Compiler for C supports arguments -Wformat-security: NO
00:02:07.698 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:07.698 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:07.698 Compiler for C supports arguments -Wnested-externs: YES
00:02:07.698 Compiler for C supports arguments -Wold-style-definition: YES
00:02:07.698 Compiler for C supports arguments -Wpointer-arith: YES
00:02:07.698 Compiler for C supports arguments -Wsign-compare: YES
00:02:07.698 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:07.698 Compiler for C supports arguments -Wundef: YES
00:02:07.698 Compiler for C supports arguments -Wwrite-strings: YES
00:02:07.698 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:07.698 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:07.698 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:07.698 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:07.698 Program objdump found: YES (/usr/bin/objdump)
00:02:07.698 Compiler for C supports arguments -mavx512f: YES
00:02:07.698 Checking if "AVX512 checking" compiles: YES
00:02:07.698 Fetching value of define "__SSE4_2__" : 1
00:02:07.698 Fetching value of define "__AES__" : 1
00:02:07.698 Fetching value of define "__AVX__" : 1
00:02:07.698 Fetching value of define "__AVX2__" : 1
00:02:07.698 Fetching value of define "__AVX512BW__" : 1
00:02:07.698 Fetching value of define "__AVX512CD__" : 1
00:02:07.698 Fetching value of define "__AVX512DQ__" : 1
00:02:07.698 Fetching value of define "__AVX512F__" : 1
00:02:07.698 Fetching value of define "__AVX512VL__" : 1
00:02:07.698 Fetching value of define
"__PCLMUL__" : 1 00:02:07.698 Fetching value of define "__RDRND__" : 1 00:02:07.698 Fetching value of define "__RDSEED__" : 1 00:02:07.698 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:07.698 Fetching value of define "__znver1__" : (undefined) 00:02:07.698 Fetching value of define "__znver2__" : (undefined) 00:02:07.698 Fetching value of define "__znver3__" : (undefined) 00:02:07.698 Fetching value of define "__znver4__" : (undefined) 00:02:07.698 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:07.698 Message: lib/log: Defining dependency "log" 00:02:07.698 Message: lib/kvargs: Defining dependency "kvargs" 00:02:07.698 Message: lib/telemetry: Defining dependency "telemetry" 00:02:07.698 Checking for function "getentropy" : NO 00:02:07.698 Message: lib/eal: Defining dependency "eal" 00:02:07.698 Message: lib/ring: Defining dependency "ring" 00:02:07.698 Message: lib/rcu: Defining dependency "rcu" 00:02:07.698 Message: lib/mempool: Defining dependency "mempool" 00:02:07.698 Message: lib/mbuf: Defining dependency "mbuf" 00:02:07.698 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:07.698 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:07.699 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:07.699 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:07.699 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:07.699 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:07.699 Compiler for C supports arguments -mpclmul: YES 00:02:07.699 Compiler for C supports arguments -maes: YES 00:02:07.699 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:07.699 Compiler for C supports arguments -mavx512bw: YES 00:02:07.699 Compiler for C supports arguments -mavx512dq: YES 00:02:07.699 Compiler for C supports arguments -mavx512vl: YES 00:02:07.699 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:07.699 Compiler for C supports arguments -mavx2: YES 00:02:07.699 Compiler 
for C supports arguments -mavx: YES 00:02:07.699 Message: lib/net: Defining dependency "net" 00:02:07.699 Message: lib/meter: Defining dependency "meter" 00:02:07.699 Message: lib/ethdev: Defining dependency "ethdev" 00:02:07.699 Message: lib/pci: Defining dependency "pci" 00:02:07.699 Message: lib/cmdline: Defining dependency "cmdline" 00:02:07.699 Message: lib/hash: Defining dependency "hash" 00:02:07.699 Message: lib/timer: Defining dependency "timer" 00:02:07.699 Message: lib/compressdev: Defining dependency "compressdev" 00:02:07.699 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:07.699 Message: lib/dmadev: Defining dependency "dmadev" 00:02:07.699 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:07.699 Message: lib/power: Defining dependency "power" 00:02:07.699 Message: lib/reorder: Defining dependency "reorder" 00:02:07.699 Message: lib/security: Defining dependency "security" 00:02:07.699 Has header "linux/userfaultfd.h" : YES 00:02:07.699 Has header "linux/vduse.h" : YES 00:02:07.699 Message: lib/vhost: Defining dependency "vhost" 00:02:07.699 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:07.699 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:07.699 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:07.699 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:07.699 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:07.699 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:07.699 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:07.699 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:07.699 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:07.699 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:07.699 Program doxygen found: YES (/usr/bin/doxygen) 00:02:07.699 Configuring 
doxy-api-html.conf using configuration 00:02:07.699 Configuring doxy-api-man.conf using configuration 00:02:07.699 Program mandb found: YES (/usr/bin/mandb) 00:02:07.699 Program sphinx-build found: NO 00:02:07.699 Configuring rte_build_config.h using configuration 00:02:07.699 Message: 00:02:07.699 ================= 00:02:07.699 Applications Enabled 00:02:07.699 ================= 00:02:07.699 00:02:07.699 apps: 00:02:07.699 00:02:07.699 00:02:07.699 Message: 00:02:07.699 ================= 00:02:07.699 Libraries Enabled 00:02:07.699 ================= 00:02:07.699 00:02:07.699 libs: 00:02:07.699 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:07.699 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:07.699 cryptodev, dmadev, power, reorder, security, vhost, 00:02:07.699 00:02:07.699 Message: 00:02:07.699 =============== 00:02:07.699 Drivers Enabled 00:02:07.699 =============== 00:02:07.699 00:02:07.699 common: 00:02:07.699 00:02:07.699 bus: 00:02:07.699 pci, vdev, 00:02:07.699 mempool: 00:02:07.699 ring, 00:02:07.699 dma: 00:02:07.699 00:02:07.699 net: 00:02:07.699 00:02:07.699 crypto: 00:02:07.699 00:02:07.699 compress: 00:02:07.699 00:02:07.699 vdpa: 00:02:07.699 00:02:07.699 00:02:07.699 Message: 00:02:07.699 ================= 00:02:07.699 Content Skipped 00:02:07.699 ================= 00:02:07.699 00:02:07.699 apps: 00:02:07.699 dumpcap: explicitly disabled via build config 00:02:07.699 graph: explicitly disabled via build config 00:02:07.699 pdump: explicitly disabled via build config 00:02:07.699 proc-info: explicitly disabled via build config 00:02:07.699 test-acl: explicitly disabled via build config 00:02:07.699 test-bbdev: explicitly disabled via build config 00:02:07.699 test-cmdline: explicitly disabled via build config 00:02:07.699 test-compress-perf: explicitly disabled via build config 00:02:07.699 test-crypto-perf: explicitly disabled via build config 00:02:07.699 test-dma-perf: explicitly disabled via build config 
00:02:07.699 test-eventdev: explicitly disabled via build config 00:02:07.699 test-fib: explicitly disabled via build config 00:02:07.699 test-flow-perf: explicitly disabled via build config 00:02:07.699 test-gpudev: explicitly disabled via build config 00:02:07.699 test-mldev: explicitly disabled via build config 00:02:07.699 test-pipeline: explicitly disabled via build config 00:02:07.699 test-pmd: explicitly disabled via build config 00:02:07.699 test-regex: explicitly disabled via build config 00:02:07.699 test-sad: explicitly disabled via build config 00:02:07.699 test-security-perf: explicitly disabled via build config 00:02:07.699 00:02:07.699 libs: 00:02:07.699 argparse: explicitly disabled via build config 00:02:07.699 metrics: explicitly disabled via build config 00:02:07.699 acl: explicitly disabled via build config 00:02:07.699 bbdev: explicitly disabled via build config 00:02:07.699 bitratestats: explicitly disabled via build config 00:02:07.699 bpf: explicitly disabled via build config 00:02:07.699 cfgfile: explicitly disabled via build config 00:02:07.699 distributor: explicitly disabled via build config 00:02:07.699 efd: explicitly disabled via build config 00:02:07.699 eventdev: explicitly disabled via build config 00:02:07.699 dispatcher: explicitly disabled via build config 00:02:07.699 gpudev: explicitly disabled via build config 00:02:07.699 gro: explicitly disabled via build config 00:02:07.699 gso: explicitly disabled via build config 00:02:07.699 ip_frag: explicitly disabled via build config 00:02:07.699 jobstats: explicitly disabled via build config 00:02:07.699 latencystats: explicitly disabled via build config 00:02:07.699 lpm: explicitly disabled via build config 00:02:07.699 member: explicitly disabled via build config 00:02:07.699 pcapng: explicitly disabled via build config 00:02:07.699 rawdev: explicitly disabled via build config 00:02:07.699 regexdev: explicitly disabled via build config 00:02:07.699 mldev: explicitly disabled via 
build config 00:02:07.699 rib: explicitly disabled via build config 00:02:07.699 sched: explicitly disabled via build config 00:02:07.699 stack: explicitly disabled via build config 00:02:07.699 ipsec: explicitly disabled via build config 00:02:07.699 pdcp: explicitly disabled via build config 00:02:07.699 fib: explicitly disabled via build config 00:02:07.699 port: explicitly disabled via build config 00:02:07.699 pdump: explicitly disabled via build config 00:02:07.699 table: explicitly disabled via build config 00:02:07.699 pipeline: explicitly disabled via build config 00:02:07.699 graph: explicitly disabled via build config 00:02:07.699 node: explicitly disabled via build config 00:02:07.699 00:02:07.699 drivers: 00:02:07.699 common/cpt: not in enabled drivers build config 00:02:07.699 common/dpaax: not in enabled drivers build config 00:02:07.699 common/iavf: not in enabled drivers build config 00:02:07.699 common/idpf: not in enabled drivers build config 00:02:07.699 common/ionic: not in enabled drivers build config 00:02:07.699 common/mvep: not in enabled drivers build config 00:02:07.699 common/octeontx: not in enabled drivers build config 00:02:07.699 bus/auxiliary: not in enabled drivers build config 00:02:07.699 bus/cdx: not in enabled drivers build config 00:02:07.699 bus/dpaa: not in enabled drivers build config 00:02:07.699 bus/fslmc: not in enabled drivers build config 00:02:07.699 bus/ifpga: not in enabled drivers build config 00:02:07.699 bus/platform: not in enabled drivers build config 00:02:07.699 bus/uacce: not in enabled drivers build config 00:02:07.699 bus/vmbus: not in enabled drivers build config 00:02:07.699 common/cnxk: not in enabled drivers build config 00:02:07.699 common/mlx5: not in enabled drivers build config 00:02:07.699 common/nfp: not in enabled drivers build config 00:02:07.699 common/nitrox: not in enabled drivers build config 00:02:07.699 common/qat: not in enabled drivers build config 00:02:07.699 common/sfc_efx: not in 
enabled drivers build config 00:02:07.699 mempool/bucket: not in enabled drivers build config 00:02:07.699 mempool/cnxk: not in enabled drivers build config 00:02:07.699 mempool/dpaa: not in enabled drivers build config 00:02:07.699 mempool/dpaa2: not in enabled drivers build config 00:02:07.699 mempool/octeontx: not in enabled drivers build config 00:02:07.699 mempool/stack: not in enabled drivers build config 00:02:07.699 dma/cnxk: not in enabled drivers build config 00:02:07.699 dma/dpaa: not in enabled drivers build config 00:02:07.699 dma/dpaa2: not in enabled drivers build config 00:02:07.699 dma/hisilicon: not in enabled drivers build config 00:02:07.699 dma/idxd: not in enabled drivers build config 00:02:07.699 dma/ioat: not in enabled drivers build config 00:02:07.699 dma/skeleton: not in enabled drivers build config 00:02:07.699 net/af_packet: not in enabled drivers build config 00:02:07.699 net/af_xdp: not in enabled drivers build config 00:02:07.699 net/ark: not in enabled drivers build config 00:02:07.699 net/atlantic: not in enabled drivers build config 00:02:07.699 net/avp: not in enabled drivers build config 00:02:07.699 net/axgbe: not in enabled drivers build config 00:02:07.699 net/bnx2x: not in enabled drivers build config 00:02:07.699 net/bnxt: not in enabled drivers build config 00:02:07.699 net/bonding: not in enabled drivers build config 00:02:07.699 net/cnxk: not in enabled drivers build config 00:02:07.699 net/cpfl: not in enabled drivers build config 00:02:07.699 net/cxgbe: not in enabled drivers build config 00:02:07.699 net/dpaa: not in enabled drivers build config 00:02:07.699 net/dpaa2: not in enabled drivers build config 00:02:07.699 net/e1000: not in enabled drivers build config 00:02:07.699 net/ena: not in enabled drivers build config 00:02:07.699 net/enetc: not in enabled drivers build config 00:02:07.699 net/enetfec: not in enabled drivers build config 00:02:07.699 net/enic: not in enabled drivers build config 00:02:07.699 
net/failsafe: not in enabled drivers build config 00:02:07.699 net/fm10k: not in enabled drivers build config 00:02:07.699 net/gve: not in enabled drivers build config 00:02:07.699 net/hinic: not in enabled drivers build config 00:02:07.699 net/hns3: not in enabled drivers build config 00:02:07.699 net/i40e: not in enabled drivers build config 00:02:07.699 net/iavf: not in enabled drivers build config 00:02:07.699 net/ice: not in enabled drivers build config 00:02:07.699 net/idpf: not in enabled drivers build config 00:02:07.699 net/igc: not in enabled drivers build config 00:02:07.699 net/ionic: not in enabled drivers build config 00:02:07.699 net/ipn3ke: not in enabled drivers build config 00:02:07.699 net/ixgbe: not in enabled drivers build config 00:02:07.699 net/mana: not in enabled drivers build config 00:02:07.699 net/memif: not in enabled drivers build config 00:02:07.699 net/mlx4: not in enabled drivers build config 00:02:07.699 net/mlx5: not in enabled drivers build config 00:02:07.699 net/mvneta: not in enabled drivers build config 00:02:07.699 net/mvpp2: not in enabled drivers build config 00:02:07.699 net/netvsc: not in enabled drivers build config 00:02:07.699 net/nfb: not in enabled drivers build config 00:02:07.699 net/nfp: not in enabled drivers build config 00:02:07.699 net/ngbe: not in enabled drivers build config 00:02:07.699 net/null: not in enabled drivers build config 00:02:07.699 net/octeontx: not in enabled drivers build config 00:02:07.699 net/octeon_ep: not in enabled drivers build config 00:02:07.699 net/pcap: not in enabled drivers build config 00:02:07.699 net/pfe: not in enabled drivers build config 00:02:07.699 net/qede: not in enabled drivers build config 00:02:07.699 net/ring: not in enabled drivers build config 00:02:07.699 net/sfc: not in enabled drivers build config 00:02:07.699 net/softnic: not in enabled drivers build config 00:02:07.699 net/tap: not in enabled drivers build config 00:02:07.699 net/thunderx: not in enabled 
drivers build config 00:02:07.699 net/txgbe: not in enabled drivers build config 00:02:07.699 net/vdev_netvsc: not in enabled drivers build config 00:02:07.699 net/vhost: not in enabled drivers build config 00:02:07.699 net/virtio: not in enabled drivers build config 00:02:07.699 net/vmxnet3: not in enabled drivers build config 00:02:07.699 raw/*: missing internal dependency, "rawdev" 00:02:07.699 crypto/armv8: not in enabled drivers build config 00:02:07.699 crypto/bcmfs: not in enabled drivers build config 00:02:07.699 crypto/caam_jr: not in enabled drivers build config 00:02:07.700 crypto/ccp: not in enabled drivers build config 00:02:07.700 crypto/cnxk: not in enabled drivers build config 00:02:07.700 crypto/dpaa_sec: not in enabled drivers build config 00:02:07.700 crypto/dpaa2_sec: not in enabled drivers build config 00:02:07.700 crypto/ipsec_mb: not in enabled drivers build config 00:02:07.700 crypto/mlx5: not in enabled drivers build config 00:02:07.700 crypto/mvsam: not in enabled drivers build config 00:02:07.700 crypto/nitrox: not in enabled drivers build config 00:02:07.700 crypto/null: not in enabled drivers build config 00:02:07.700 crypto/octeontx: not in enabled drivers build config 00:02:07.700 crypto/openssl: not in enabled drivers build config 00:02:07.700 crypto/scheduler: not in enabled drivers build config 00:02:07.700 crypto/uadk: not in enabled drivers build config 00:02:07.700 crypto/virtio: not in enabled drivers build config 00:02:07.700 compress/isal: not in enabled drivers build config 00:02:07.700 compress/mlx5: not in enabled drivers build config 00:02:07.700 compress/nitrox: not in enabled drivers build config 00:02:07.700 compress/octeontx: not in enabled drivers build config 00:02:07.700 compress/zlib: not in enabled drivers build config 00:02:07.700 regex/*: missing internal dependency, "regexdev" 00:02:07.700 ml/*: missing internal dependency, "mldev" 00:02:07.700 vdpa/ifc: not in enabled drivers build config 00:02:07.700 
vdpa/mlx5: not in enabled drivers build config 00:02:07.700 vdpa/nfp: not in enabled drivers build config 00:02:07.700 vdpa/sfc: not in enabled drivers build config 00:02:07.700 event/*: missing internal dependency, "eventdev" 00:02:07.700 baseband/*: missing internal dependency, "bbdev" 00:02:07.700 gpu/*: missing internal dependency, "gpudev" 00:02:07.700 00:02:07.700 00:02:07.700 Build targets in project: 85 00:02:07.700 00:02:07.700 DPDK 24.03.0 00:02:07.700 00:02:07.700 User defined options 00:02:07.700 buildtype : debug 00:02:07.700 default_library : shared 00:02:07.700 libdir : lib 00:02:07.700 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:07.700 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:07.700 c_link_args : 00:02:07.700 cpu_instruction_set: native 00:02:07.700 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:07.700 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:07.700 enable_docs : false 00:02:07.700 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:07.700 enable_kmods : false 00:02:07.700 max_lcores : 128 00:02:07.700 tests : false 00:02:07.700 00:02:07.700 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:07.700 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:07.956 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:07.956 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:07.956 [3/268] Linking static target lib/librte_kvargs.a 00:02:07.956 [4/268] Compiling C 
object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:07.956 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:07.956 [6/268] Linking static target lib/librte_log.a 00:02:08.214 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:08.214 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:08.471 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:08.471 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:08.471 [11/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.471 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:08.471 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:08.471 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:08.471 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:08.471 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:08.471 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:08.471 [18/268] Linking static target lib/librte_telemetry.a 00:02:09.035 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.035 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:09.035 [21/268] Linking target lib/librte_log.so.24.1 00:02:09.035 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:09.035 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:09.293 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:09.293 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:09.293 [26/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:09.293 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:09.293 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:09.293 [29/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:09.293 [30/268] Linking target lib/librte_kvargs.so.24.1 00:02:09.293 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:09.550 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:09.550 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.550 [34/268] Linking target lib/librte_telemetry.so.24.1 00:02:09.807 [35/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:09.807 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:09.807 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:09.807 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:09.807 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:09.807 [40/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:09.807 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:09.807 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:10.066 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:10.066 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:10.066 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:10.066 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:10.066 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 
00:02:10.325 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:10.325 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:10.325 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:10.583 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:10.583 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:10.583 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:10.583 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:10.583 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:10.583 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:10.841 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:10.841 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:10.841 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:10.841 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:11.100 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:11.100 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:11.100 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:11.100 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:11.100 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:11.359 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:11.359 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:11.359 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:11.630 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:11.630 [70/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:11.630 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:11.630 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:11.630 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:11.912 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:11.912 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:11.912 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:11.912 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:11.912 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:11.912 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:11.912 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:11.912 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:12.170 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:12.170 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:12.170 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:12.428 [85/268] Linking static target lib/librte_eal.a 00:02:12.428 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:12.428 [87/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:12.428 [88/268] Linking static target lib/librte_ring.a 00:02:12.687 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:12.687 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:12.687 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:12.687 [92/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:12.687 [93/268] Linking static target lib/librte_rcu.a 00:02:12.944 [94/268] Generating 
lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.944 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:12.944 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:12.944 [97/268] Linking static target lib/librte_mempool.a 00:02:13.202 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:13.202 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:13.202 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:13.202 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:13.202 [102/268] Linking static target lib/librte_mbuf.a 00:02:13.202 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:13.459 [104/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.459 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:13.459 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:13.717 [107/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:13.717 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:13.717 [109/268] Linking static target lib/librte_meter.a 00:02:13.717 [110/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:13.717 [111/268] Linking static target lib/librte_net.a 00:02:14.286 [112/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.286 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:14.286 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:14.286 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:14.286 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.286 [117/268] Generating lib/mempool.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:14.286 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:14.551 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.807 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:15.063 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:15.063 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:15.063 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:15.318 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:15.318 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:15.318 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:15.318 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:15.318 [128/268] Linking static target lib/librte_pci.a 00:02:15.318 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:15.318 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:15.574 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:15.574 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:15.574 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:15.574 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:15.574 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:15.574 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:15.574 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:15.831 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:15.831 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 
00:02:15.831 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:15.831 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:15.831 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:15.831 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:15.831 [144/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.831 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:15.831 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:15.831 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:15.831 [148/268] Linking static target lib/librte_cmdline.a 00:02:16.396 [149/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:16.396 [150/268] Linking static target lib/librte_timer.a 00:02:16.397 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:16.397 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:16.397 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:16.397 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:16.654 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:16.654 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:16.654 [157/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:16.654 [158/268] Linking static target lib/librte_hash.a 00:02:16.654 [159/268] Linking static target lib/librte_ethdev.a 00:02:16.913 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:16.913 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:16.913 [162/268] Linking static target lib/librte_compressdev.a 
00:02:16.913 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:16.913 [164/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.172 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:17.172 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:17.172 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:17.172 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:17.430 [169/268] Linking static target lib/librte_dmadev.a 00:02:17.430 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:17.430 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:17.687 [172/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:17.687 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:17.687 [174/268] Linking static target lib/librte_cryptodev.a 00:02:17.687 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:17.953 [176/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.953 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:17.953 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:18.216 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:18.216 [180/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.216 [181/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.216 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:18.216 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:18.474 [184/268] Generating lib/dmadev.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:18.474 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:18.474 [186/268] Linking static target lib/librte_power.a 00:02:18.474 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:18.474 [188/268] Linking static target lib/librte_reorder.a 00:02:18.732 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:18.732 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:18.732 [191/268] Linking static target lib/librte_security.a 00:02:18.732 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:18.732 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:19.313 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.313 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:19.570 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:19.570 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.570 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:19.828 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:19.828 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:19.828 [201/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.087 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:20.087 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:20.350 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:20.350 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:20.350 [206/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:20.350 [207/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.350 [208/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:20.350 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:20.350 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:20.609 [211/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:20.609 [212/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:20.609 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:20.609 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:20.609 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:20.609 [216/268] Linking static target drivers/librte_bus_vdev.a 00:02:20.609 [217/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:20.609 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:20.609 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:20.865 [220/268] Linking static target drivers/librte_bus_pci.a 00:02:20.865 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:20.865 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:20.865 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.865 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:20.865 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:20.865 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 
00:02:21.121 [227/268] Linking static target drivers/librte_mempool_ring.a 00:02:21.379 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.949 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:21.949 [230/268] Linking static target lib/librte_vhost.a 00:02:23.335 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.707 [232/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.707 [233/268] Linking target lib/librte_eal.so.24.1 00:02:24.707 [234/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:24.964 [235/268] Linking target lib/librte_dmadev.so.24.1 00:02:24.964 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:24.964 [237/268] Linking target lib/librte_meter.so.24.1 00:02:24.964 [238/268] Linking target lib/librte_ring.so.24.1 00:02:24.964 [239/268] Linking target lib/librte_timer.so.24.1 00:02:24.964 [240/268] Linking target lib/librte_pci.so.24.1 00:02:24.964 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:24.964 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:24.964 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:24.964 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:24.964 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:24.964 [246/268] Linking target lib/librte_mempool.so.24.1 00:02:24.964 [247/268] Linking target lib/librte_rcu.so.24.1 00:02:24.964 [248/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:25.221 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:25.221 [250/268] Generating symbol file 
lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:25.222 [251/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.222 [252/268] Linking target lib/librte_mbuf.so.24.1 00:02:25.222 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:25.498 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:25.498 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:02:25.498 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:25.498 [257/268] Linking target lib/librte_reorder.so.24.1 00:02:25.498 [258/268] Linking target lib/librte_net.so.24.1 00:02:25.755 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:25.755 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:25.755 [261/268] Linking target lib/librte_hash.so.24.1 00:02:25.755 [262/268] Linking target lib/librte_security.so.24.1 00:02:25.755 [263/268] Linking target lib/librte_cmdline.so.24.1 00:02:25.755 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:26.013 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:26.013 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:26.013 [267/268] Linking target lib/librte_power.so.24.1 00:02:26.013 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:26.013 INFO: autodetecting backend as ninja 00:02:26.013 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:27.387 CC lib/ut/ut.o 00:02:27.387 CC lib/log/log.o 00:02:27.387 CC lib/log/log_flags.o 00:02:27.387 CC lib/log/log_deprecated.o 00:02:27.387 CC lib/ut_mock/mock.o 00:02:27.387 LIB libspdk_ut_mock.a 00:02:27.387 LIB libspdk_log.a 00:02:27.387 LIB libspdk_ut.a 00:02:27.387 SO libspdk_ut_mock.so.6.0 00:02:27.387 SO libspdk_log.so.7.0 00:02:27.387 SO 
libspdk_ut.so.2.0 00:02:27.387 SYMLINK libspdk_ut.so 00:02:27.387 SYMLINK libspdk_ut_mock.so 00:02:27.387 SYMLINK libspdk_log.so 00:02:27.645 CC lib/ioat/ioat.o 00:02:27.645 CC lib/util/base64.o 00:02:27.645 CC lib/dma/dma.o 00:02:27.645 CC lib/util/bit_array.o 00:02:27.645 CC lib/util/cpuset.o 00:02:27.645 CC lib/util/crc32.o 00:02:27.645 CC lib/util/crc16.o 00:02:27.645 CC lib/util/crc32c.o 00:02:27.645 CXX lib/trace_parser/trace.o 00:02:27.902 CC lib/vfio_user/host/vfio_user_pci.o 00:02:27.902 CC lib/util/crc32_ieee.o 00:02:27.902 CC lib/vfio_user/host/vfio_user.o 00:02:27.902 CC lib/util/crc64.o 00:02:27.902 CC lib/util/dif.o 00:02:27.902 LIB libspdk_dma.a 00:02:27.902 CC lib/util/fd.o 00:02:27.902 CC lib/util/fd_group.o 00:02:27.902 LIB libspdk_ioat.a 00:02:27.902 SO libspdk_dma.so.4.0 00:02:28.160 SO libspdk_ioat.so.7.0 00:02:28.160 CC lib/util/file.o 00:02:28.160 SYMLINK libspdk_dma.so 00:02:28.160 CC lib/util/hexlify.o 00:02:28.160 SYMLINK libspdk_ioat.so 00:02:28.160 CC lib/util/iov.o 00:02:28.160 CC lib/util/math.o 00:02:28.160 CC lib/util/net.o 00:02:28.160 LIB libspdk_vfio_user.a 00:02:28.160 CC lib/util/pipe.o 00:02:28.160 SO libspdk_vfio_user.so.5.0 00:02:28.160 CC lib/util/strerror_tls.o 00:02:28.160 CC lib/util/string.o 00:02:28.160 CC lib/util/uuid.o 00:02:28.160 SYMLINK libspdk_vfio_user.so 00:02:28.160 CC lib/util/xor.o 00:02:28.420 CC lib/util/zipf.o 00:02:28.420 LIB libspdk_util.a 00:02:28.678 SO libspdk_util.so.10.0 00:02:28.936 SYMLINK libspdk_util.so 00:02:28.936 LIB libspdk_trace_parser.a 00:02:28.936 SO libspdk_trace_parser.so.5.0 00:02:28.936 SYMLINK libspdk_trace_parser.so 00:02:28.936 CC lib/vmd/vmd.o 00:02:28.936 CC lib/vmd/led.o 00:02:28.936 CC lib/idxd/idxd.o 00:02:28.936 CC lib/idxd/idxd_user.o 00:02:28.936 CC lib/idxd/idxd_kernel.o 00:02:28.936 CC lib/env_dpdk/env.o 00:02:28.936 CC lib/rdma_provider/common.o 00:02:28.936 CC lib/conf/conf.o 00:02:28.936 CC lib/rdma_utils/rdma_utils.o 00:02:28.936 CC lib/json/json_parse.o 
00:02:29.194 CC lib/json/json_util.o 00:02:29.194 CC lib/json/json_write.o 00:02:29.194 CC lib/env_dpdk/memory.o 00:02:29.194 CC lib/env_dpdk/pci.o 00:02:29.194 LIB libspdk_rdma_utils.a 00:02:29.194 LIB libspdk_conf.a 00:02:29.194 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:29.194 SO libspdk_rdma_utils.so.1.0 00:02:29.453 SO libspdk_conf.so.6.0 00:02:29.453 SYMLINK libspdk_rdma_utils.so 00:02:29.453 SYMLINK libspdk_conf.so 00:02:29.453 CC lib/env_dpdk/init.o 00:02:29.453 CC lib/env_dpdk/threads.o 00:02:29.453 CC lib/env_dpdk/pci_ioat.o 00:02:29.453 LIB libspdk_json.a 00:02:29.453 SO libspdk_json.so.6.0 00:02:29.453 LIB libspdk_rdma_provider.a 00:02:29.453 LIB libspdk_idxd.a 00:02:29.453 CC lib/env_dpdk/pci_virtio.o 00:02:29.453 CC lib/env_dpdk/pci_vmd.o 00:02:29.453 SO libspdk_rdma_provider.so.6.0 00:02:29.453 SO libspdk_idxd.so.12.0 00:02:29.453 SYMLINK libspdk_json.so 00:02:29.711 CC lib/env_dpdk/pci_idxd.o 00:02:29.711 LIB libspdk_vmd.a 00:02:29.711 SYMLINK libspdk_rdma_provider.so 00:02:29.711 SYMLINK libspdk_idxd.so 00:02:29.711 CC lib/env_dpdk/pci_event.o 00:02:29.711 CC lib/env_dpdk/sigbus_handler.o 00:02:29.711 CC lib/env_dpdk/pci_dpdk.o 00:02:29.711 SO libspdk_vmd.so.6.0 00:02:29.711 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:29.711 SYMLINK libspdk_vmd.so 00:02:29.711 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:29.711 CC lib/jsonrpc/jsonrpc_server.o 00:02:29.711 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:29.711 CC lib/jsonrpc/jsonrpc_client.o 00:02:29.711 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:29.970 LIB libspdk_jsonrpc.a 00:02:29.970 SO libspdk_jsonrpc.so.6.0 00:02:30.229 SYMLINK libspdk_jsonrpc.so 00:02:30.487 LIB libspdk_env_dpdk.a 00:02:30.487 CC lib/rpc/rpc.o 00:02:30.487 SO libspdk_env_dpdk.so.15.0 00:02:30.745 SYMLINK libspdk_env_dpdk.so 00:02:30.745 LIB libspdk_rpc.a 00:02:30.745 SO libspdk_rpc.so.6.0 00:02:30.745 SYMLINK libspdk_rpc.so 00:02:31.003 CC lib/notify/notify.o 00:02:31.003 CC lib/notify/notify_rpc.o 00:02:31.003 CC lib/trace/trace.o 
00:02:31.003 CC lib/trace/trace_rpc.o 00:02:31.003 CC lib/trace/trace_flags.o 00:02:31.003 CC lib/keyring/keyring.o 00:02:31.003 CC lib/keyring/keyring_rpc.o 00:02:31.262 LIB libspdk_notify.a 00:02:31.262 SO libspdk_notify.so.6.0 00:02:31.262 LIB libspdk_trace.a 00:02:31.262 SYMLINK libspdk_notify.so 00:02:31.262 LIB libspdk_keyring.a 00:02:31.262 SO libspdk_keyring.so.1.0 00:02:31.262 SO libspdk_trace.so.10.0 00:02:31.521 SYMLINK libspdk_trace.so 00:02:31.521 SYMLINK libspdk_keyring.so 00:02:31.779 CC lib/thread/thread.o 00:02:31.779 CC lib/thread/iobuf.o 00:02:31.779 CC lib/sock/sock.o 00:02:31.779 CC lib/sock/sock_rpc.o 00:02:32.037 LIB libspdk_sock.a 00:02:32.296 SO libspdk_sock.so.10.0 00:02:32.296 SYMLINK libspdk_sock.so 00:02:32.554 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:32.554 CC lib/nvme/nvme_fabric.o 00:02:32.554 CC lib/nvme/nvme_ctrlr.o 00:02:32.554 CC lib/nvme/nvme_ns_cmd.o 00:02:32.554 CC lib/nvme/nvme_ns.o 00:02:32.554 CC lib/nvme/nvme_pcie.o 00:02:32.554 CC lib/nvme/nvme_pcie_common.o 00:02:32.554 CC lib/nvme/nvme.o 00:02:32.554 CC lib/nvme/nvme_qpair.o 00:02:33.515 LIB libspdk_thread.a 00:02:33.515 CC lib/nvme/nvme_quirks.o 00:02:33.515 SO libspdk_thread.so.10.1 00:02:33.515 CC lib/nvme/nvme_transport.o 00:02:33.515 SYMLINK libspdk_thread.so 00:02:33.515 CC lib/nvme/nvme_discovery.o 00:02:33.515 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:33.515 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:33.515 CC lib/nvme/nvme_tcp.o 00:02:33.515 CC lib/nvme/nvme_opal.o 00:02:33.810 CC lib/accel/accel.o 00:02:33.810 CC lib/blob/blobstore.o 00:02:34.068 CC lib/blob/request.o 00:02:34.068 CC lib/init/json_config.o 00:02:34.068 CC lib/blob/zeroes.o 00:02:34.068 CC lib/init/subsystem.o 00:02:34.068 CC lib/init/subsystem_rpc.o 00:02:34.068 CC lib/init/rpc.o 00:02:34.326 CC lib/nvme/nvme_io_msg.o 00:02:34.326 CC lib/nvme/nvme_poll_group.o 00:02:34.326 CC lib/blob/blob_bs_dev.o 00:02:34.326 CC lib/accel/accel_rpc.o 00:02:34.326 CC lib/nvme/nvme_zns.o 00:02:34.326 LIB libspdk_init.a 
00:02:34.585 SO libspdk_init.so.5.0 00:02:34.585 CC lib/nvme/nvme_stubs.o 00:02:34.585 CC lib/nvme/nvme_auth.o 00:02:34.585 SYMLINK libspdk_init.so 00:02:34.844 CC lib/accel/accel_sw.o 00:02:34.844 CC lib/virtio/virtio.o 00:02:34.844 CC lib/virtio/virtio_vhost_user.o 00:02:34.844 CC lib/virtio/virtio_vfio_user.o 00:02:35.104 CC lib/nvme/nvme_cuse.o 00:02:35.104 CC lib/virtio/virtio_pci.o 00:02:35.104 LIB libspdk_accel.a 00:02:35.104 CC lib/nvme/nvme_rdma.o 00:02:35.104 SO libspdk_accel.so.16.0 00:02:35.104 SYMLINK libspdk_accel.so 00:02:35.362 CC lib/event/app.o 00:02:35.362 CC lib/event/reactor.o 00:02:35.362 CC lib/event/log_rpc.o 00:02:35.362 CC lib/event/app_rpc.o 00:02:35.362 LIB libspdk_virtio.a 00:02:35.362 CC lib/bdev/bdev.o 00:02:35.362 CC lib/bdev/bdev_rpc.o 00:02:35.362 CC lib/event/scheduler_static.o 00:02:35.362 SO libspdk_virtio.so.7.0 00:02:35.362 CC lib/bdev/bdev_zone.o 00:02:35.621 SYMLINK libspdk_virtio.so 00:02:35.621 CC lib/bdev/part.o 00:02:35.621 CC lib/bdev/scsi_nvme.o 00:02:35.621 LIB libspdk_event.a 00:02:35.621 SO libspdk_event.so.14.0 00:02:35.879 SYMLINK libspdk_event.so 00:02:36.446 LIB libspdk_nvme.a 00:02:36.446 LIB libspdk_blob.a 00:02:36.446 SO libspdk_nvme.so.13.1 00:02:36.705 SO libspdk_blob.so.11.0 00:02:36.705 SYMLINK libspdk_blob.so 00:02:36.990 SYMLINK libspdk_nvme.so 00:02:36.990 CC lib/blobfs/blobfs.o 00:02:36.990 CC lib/blobfs/tree.o 00:02:36.990 CC lib/lvol/lvol.o 00:02:37.925 LIB libspdk_blobfs.a 00:02:37.925 SO libspdk_blobfs.so.10.0 00:02:37.925 SYMLINK libspdk_blobfs.so 00:02:37.925 LIB libspdk_lvol.a 00:02:37.925 LIB libspdk_bdev.a 00:02:38.184 SO libspdk_lvol.so.10.0 00:02:38.184 SO libspdk_bdev.so.16.0 00:02:38.184 SYMLINK libspdk_lvol.so 00:02:38.184 SYMLINK libspdk_bdev.so 00:02:38.442 CC lib/nbd/nbd.o 00:02:38.442 CC lib/nbd/nbd_rpc.o 00:02:38.442 CC lib/scsi/dev.o 00:02:38.442 CC lib/ftl/ftl_core.o 00:02:38.442 CC lib/scsi/lun.o 00:02:38.442 CC lib/scsi/port.o 00:02:38.442 CC lib/ftl/ftl_init.o 00:02:38.442 CC 
lib/ftl/ftl_layout.o 00:02:38.442 CC lib/ublk/ublk.o 00:02:38.442 CC lib/nvmf/ctrlr.o 00:02:38.700 CC lib/ublk/ublk_rpc.o 00:02:38.700 CC lib/scsi/scsi.o 00:02:38.700 CC lib/scsi/scsi_bdev.o 00:02:38.700 CC lib/scsi/scsi_pr.o 00:02:38.700 CC lib/ftl/ftl_debug.o 00:02:38.958 CC lib/ftl/ftl_io.o 00:02:38.958 CC lib/scsi/scsi_rpc.o 00:02:38.958 CC lib/scsi/task.o 00:02:38.958 CC lib/nvmf/ctrlr_discovery.o 00:02:38.958 LIB libspdk_nbd.a 00:02:38.958 CC lib/nvmf/ctrlr_bdev.o 00:02:38.958 SO libspdk_nbd.so.7.0 00:02:38.958 CC lib/nvmf/subsystem.o 00:02:38.958 SYMLINK libspdk_nbd.so 00:02:38.958 CC lib/nvmf/nvmf.o 00:02:39.216 CC lib/ftl/ftl_sb.o 00:02:39.216 CC lib/ftl/ftl_l2p.o 00:02:39.216 LIB libspdk_ublk.a 00:02:39.216 CC lib/ftl/ftl_l2p_flat.o 00:02:39.216 SO libspdk_ublk.so.3.0 00:02:39.216 SYMLINK libspdk_ublk.so 00:02:39.216 CC lib/nvmf/nvmf_rpc.o 00:02:39.216 LIB libspdk_scsi.a 00:02:39.216 CC lib/nvmf/transport.o 00:02:39.216 SO libspdk_scsi.so.9.0 00:02:39.473 CC lib/nvmf/tcp.o 00:02:39.473 CC lib/ftl/ftl_nv_cache.o 00:02:39.473 SYMLINK libspdk_scsi.so 00:02:39.473 CC lib/ftl/ftl_band.o 00:02:39.473 CC lib/ftl/ftl_band_ops.o 00:02:39.731 CC lib/nvmf/stubs.o 00:02:39.731 CC lib/ftl/ftl_writer.o 00:02:39.987 CC lib/ftl/ftl_rq.o 00:02:39.987 CC lib/ftl/ftl_reloc.o 00:02:39.987 CC lib/nvmf/mdns_server.o 00:02:39.987 CC lib/ftl/ftl_l2p_cache.o 00:02:40.245 CC lib/ftl/ftl_p2l.o 00:02:40.245 CC lib/ftl/mngt/ftl_mngt.o 00:02:40.245 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:40.245 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:40.245 CC lib/iscsi/conn.o 00:02:40.245 CC lib/iscsi/init_grp.o 00:02:40.245 CC lib/iscsi/iscsi.o 00:02:40.503 CC lib/iscsi/md5.o 00:02:40.503 CC lib/iscsi/param.o 00:02:40.503 CC lib/iscsi/portal_grp.o 00:02:40.503 CC lib/iscsi/tgt_node.o 00:02:40.503 CC lib/iscsi/iscsi_subsystem.o 00:02:40.760 CC lib/iscsi/iscsi_rpc.o 00:02:40.760 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:40.760 CC lib/iscsi/task.o 00:02:40.760 CC lib/vhost/vhost.o 00:02:40.760 CC 
lib/nvmf/rdma.o 00:02:40.760 CC lib/nvmf/auth.o 00:02:40.760 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:41.018 CC lib/vhost/vhost_rpc.o 00:02:41.018 CC lib/vhost/vhost_scsi.o 00:02:41.018 CC lib/vhost/vhost_blk.o 00:02:41.018 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:41.018 CC lib/vhost/rte_vhost_user.o 00:02:41.276 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:41.276 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:41.276 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:41.533 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:41.533 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:41.533 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:41.791 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:41.791 CC lib/ftl/utils/ftl_conf.o 00:02:41.791 CC lib/ftl/utils/ftl_md.o 00:02:41.791 LIB libspdk_iscsi.a 00:02:41.791 CC lib/ftl/utils/ftl_mempool.o 00:02:41.791 SO libspdk_iscsi.so.8.0 00:02:42.049 CC lib/ftl/utils/ftl_bitmap.o 00:02:42.049 CC lib/ftl/utils/ftl_property.o 00:02:42.049 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:42.049 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:42.049 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:42.049 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:42.049 SYMLINK libspdk_iscsi.so 00:02:42.049 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:42.049 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:42.317 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:42.317 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:42.317 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:42.317 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:42.317 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:42.317 LIB libspdk_vhost.a 00:02:42.317 CC lib/ftl/base/ftl_base_dev.o 00:02:42.317 CC lib/ftl/base/ftl_base_bdev.o 00:02:42.317 CC lib/ftl/ftl_trace.o 00:02:42.317 SO libspdk_vhost.so.8.0 00:02:42.631 SYMLINK libspdk_vhost.so 00:02:42.631 LIB libspdk_ftl.a 00:02:42.901 SO libspdk_ftl.so.9.0 00:02:42.901 LIB libspdk_nvmf.a 00:02:43.159 SO libspdk_nvmf.so.19.0 00:02:43.159 SYMLINK libspdk_ftl.so 00:02:43.418 SYMLINK libspdk_nvmf.so 00:02:43.676 CC module/env_dpdk/env_dpdk_rpc.o 00:02:43.676 CC 
module/sock/posix/posix.o 00:02:43.676 CC module/accel/dsa/accel_dsa.o 00:02:43.676 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:43.676 CC module/accel/error/accel_error.o 00:02:43.676 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:43.676 CC module/accel/ioat/accel_ioat.o 00:02:43.676 CC module/accel/iaa/accel_iaa.o 00:02:43.676 CC module/keyring/file/keyring.o 00:02:43.676 CC module/blob/bdev/blob_bdev.o 00:02:43.934 LIB libspdk_env_dpdk_rpc.a 00:02:43.934 SO libspdk_env_dpdk_rpc.so.6.0 00:02:43.934 CC module/keyring/file/keyring_rpc.o 00:02:43.934 LIB libspdk_scheduler_dpdk_governor.a 00:02:43.934 CC module/accel/ioat/accel_ioat_rpc.o 00:02:43.934 CC module/accel/error/accel_error_rpc.o 00:02:43.934 SYMLINK libspdk_env_dpdk_rpc.so 00:02:43.934 CC module/accel/iaa/accel_iaa_rpc.o 00:02:43.934 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:43.934 LIB libspdk_scheduler_dynamic.a 00:02:43.934 CC module/accel/dsa/accel_dsa_rpc.o 00:02:43.934 SO libspdk_scheduler_dynamic.so.4.0 00:02:44.191 LIB libspdk_blob_bdev.a 00:02:44.191 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:44.191 LIB libspdk_accel_ioat.a 00:02:44.191 SYMLINK libspdk_scheduler_dynamic.so 00:02:44.191 SO libspdk_blob_bdev.so.11.0 00:02:44.191 LIB libspdk_accel_error.a 00:02:44.191 LIB libspdk_keyring_file.a 00:02:44.191 LIB libspdk_accel_iaa.a 00:02:44.191 SO libspdk_accel_ioat.so.6.0 00:02:44.191 SO libspdk_accel_error.so.2.0 00:02:44.191 SO libspdk_accel_iaa.so.3.0 00:02:44.191 SO libspdk_keyring_file.so.1.0 00:02:44.191 LIB libspdk_accel_dsa.a 00:02:44.191 SYMLINK libspdk_blob_bdev.so 00:02:44.191 CC module/keyring/linux/keyring.o 00:02:44.191 CC module/keyring/linux/keyring_rpc.o 00:02:44.191 SYMLINK libspdk_accel_ioat.so 00:02:44.191 SYMLINK libspdk_accel_error.so 00:02:44.191 SO libspdk_accel_dsa.so.5.0 00:02:44.191 SYMLINK libspdk_keyring_file.so 00:02:44.191 SYMLINK libspdk_accel_iaa.so 00:02:44.191 CC module/sock/uring/uring.o 00:02:44.191 CC 
module/scheduler/gscheduler/gscheduler.o 00:02:44.191 SYMLINK libspdk_accel_dsa.so 00:02:44.449 LIB libspdk_keyring_linux.a 00:02:44.449 SO libspdk_keyring_linux.so.1.0 00:02:44.449 LIB libspdk_scheduler_gscheduler.a 00:02:44.449 SYMLINK libspdk_keyring_linux.so 00:02:44.449 LIB libspdk_sock_posix.a 00:02:44.449 SO libspdk_scheduler_gscheduler.so.4.0 00:02:44.449 CC module/blobfs/bdev/blobfs_bdev.o 00:02:44.449 CC module/bdev/malloc/bdev_malloc.o 00:02:44.449 CC module/bdev/gpt/gpt.o 00:02:44.449 CC module/bdev/delay/vbdev_delay.o 00:02:44.449 CC module/bdev/lvol/vbdev_lvol.o 00:02:44.449 SO libspdk_sock_posix.so.6.0 00:02:44.449 CC module/bdev/error/vbdev_error.o 00:02:44.708 SYMLINK libspdk_scheduler_gscheduler.so 00:02:44.708 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:44.708 SYMLINK libspdk_sock_posix.so 00:02:44.708 CC module/bdev/error/vbdev_error_rpc.o 00:02:44.708 CC module/bdev/null/bdev_null.o 00:02:44.708 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:44.708 CC module/bdev/gpt/vbdev_gpt.o 00:02:44.965 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:44.965 LIB libspdk_bdev_error.a 00:02:44.965 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:44.965 SO libspdk_bdev_error.so.6.0 00:02:44.965 LIB libspdk_blobfs_bdev.a 00:02:44.965 LIB libspdk_sock_uring.a 00:02:44.965 CC module/bdev/null/bdev_null_rpc.o 00:02:44.965 SO libspdk_sock_uring.so.5.0 00:02:44.965 SO libspdk_blobfs_bdev.so.6.0 00:02:44.965 SYMLINK libspdk_bdev_error.so 00:02:44.965 LIB libspdk_bdev_gpt.a 00:02:44.966 SYMLINK libspdk_blobfs_bdev.so 00:02:44.966 LIB libspdk_bdev_lvol.a 00:02:44.966 SYMLINK libspdk_sock_uring.so 00:02:44.966 LIB libspdk_bdev_delay.a 00:02:44.966 SO libspdk_bdev_gpt.so.6.0 00:02:44.966 LIB libspdk_bdev_malloc.a 00:02:44.966 SO libspdk_bdev_lvol.so.6.0 00:02:44.966 SO libspdk_bdev_delay.so.6.0 00:02:45.224 SO libspdk_bdev_malloc.so.6.0 00:02:45.224 LIB libspdk_bdev_null.a 00:02:45.224 SYMLINK libspdk_bdev_gpt.so 00:02:45.224 CC module/bdev/nvme/bdev_nvme.o 00:02:45.224 SYMLINK 
libspdk_bdev_lvol.so 00:02:45.224 SYMLINK libspdk_bdev_delay.so 00:02:45.224 SO libspdk_bdev_null.so.6.0 00:02:45.224 SYMLINK libspdk_bdev_malloc.so 00:02:45.224 CC module/bdev/passthru/vbdev_passthru.o 00:02:45.224 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:45.224 CC module/bdev/raid/bdev_raid.o 00:02:45.224 CC module/bdev/split/vbdev_split.o 00:02:45.224 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:45.224 SYMLINK libspdk_bdev_null.so 00:02:45.224 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:45.483 CC module/bdev/uring/bdev_uring.o 00:02:45.483 CC module/bdev/aio/bdev_aio.o 00:02:45.483 CC module/bdev/ftl/bdev_ftl.o 00:02:45.483 CC module/bdev/split/vbdev_split_rpc.o 00:02:45.483 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:45.483 LIB libspdk_bdev_zone_block.a 00:02:45.741 CC module/bdev/iscsi/bdev_iscsi.o 00:02:45.741 SO libspdk_bdev_zone_block.so.6.0 00:02:45.741 LIB libspdk_bdev_split.a 00:02:45.741 LIB libspdk_bdev_passthru.a 00:02:45.741 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:45.742 SO libspdk_bdev_split.so.6.0 00:02:45.742 CC module/bdev/uring/bdev_uring_rpc.o 00:02:45.742 CC module/bdev/aio/bdev_aio_rpc.o 00:02:45.742 SO libspdk_bdev_passthru.so.6.0 00:02:45.742 SYMLINK libspdk_bdev_zone_block.so 00:02:45.742 CC module/bdev/nvme/nvme_rpc.o 00:02:45.742 SYMLINK libspdk_bdev_split.so 00:02:45.742 CC module/bdev/nvme/bdev_mdns_client.o 00:02:45.742 SYMLINK libspdk_bdev_passthru.so 00:02:45.742 CC module/bdev/nvme/vbdev_opal.o 00:02:46.000 LIB libspdk_bdev_uring.a 00:02:46.000 LIB libspdk_bdev_ftl.a 00:02:46.000 LIB libspdk_bdev_aio.a 00:02:46.000 SO libspdk_bdev_ftl.so.6.0 00:02:46.000 SO libspdk_bdev_uring.so.6.0 00:02:46.000 SO libspdk_bdev_aio.so.6.0 00:02:46.000 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:46.000 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:46.000 SYMLINK libspdk_bdev_ftl.so 00:02:46.000 SYMLINK libspdk_bdev_uring.so 00:02:46.000 SYMLINK libspdk_bdev_aio.so 00:02:46.000 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 
00:02:46.000 CC module/bdev/raid/bdev_raid_rpc.o 00:02:46.000 CC module/bdev/raid/bdev_raid_sb.o 00:02:46.000 CC module/bdev/raid/raid0.o 00:02:46.000 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:46.000 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:46.258 LIB libspdk_bdev_iscsi.a 00:02:46.258 SO libspdk_bdev_iscsi.so.6.0 00:02:46.258 CC module/bdev/raid/raid1.o 00:02:46.258 CC module/bdev/raid/concat.o 00:02:46.258 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:46.258 SYMLINK libspdk_bdev_iscsi.so 00:02:46.516 LIB libspdk_bdev_raid.a 00:02:46.516 LIB libspdk_bdev_virtio.a 00:02:46.516 SO libspdk_bdev_raid.so.6.0 00:02:46.516 SO libspdk_bdev_virtio.so.6.0 00:02:46.774 SYMLINK libspdk_bdev_virtio.so 00:02:46.774 SYMLINK libspdk_bdev_raid.so 00:02:47.382 LIB libspdk_bdev_nvme.a 00:02:47.641 SO libspdk_bdev_nvme.so.7.0 00:02:47.641 SYMLINK libspdk_bdev_nvme.so 00:02:48.208 CC module/event/subsystems/keyring/keyring.o 00:02:48.208 CC module/event/subsystems/iobuf/iobuf.o 00:02:48.208 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:48.208 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:48.208 CC module/event/subsystems/scheduler/scheduler.o 00:02:48.208 CC module/event/subsystems/vmd/vmd.o 00:02:48.208 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:48.208 CC module/event/subsystems/sock/sock.o 00:02:48.467 LIB libspdk_event_keyring.a 00:02:48.467 LIB libspdk_event_vhost_blk.a 00:02:48.467 LIB libspdk_event_scheduler.a 00:02:48.467 LIB libspdk_event_iobuf.a 00:02:48.467 SO libspdk_event_keyring.so.1.0 00:02:48.467 LIB libspdk_event_vmd.a 00:02:48.467 SO libspdk_event_vhost_blk.so.3.0 00:02:48.467 LIB libspdk_event_sock.a 00:02:48.467 SO libspdk_event_scheduler.so.4.0 00:02:48.467 SO libspdk_event_iobuf.so.3.0 00:02:48.467 SO libspdk_event_sock.so.5.0 00:02:48.467 SO libspdk_event_vmd.so.6.0 00:02:48.467 SYMLINK libspdk_event_keyring.so 00:02:48.467 SYMLINK libspdk_event_vhost_blk.so 00:02:48.467 SYMLINK libspdk_event_scheduler.so 00:02:48.467 SYMLINK 
libspdk_event_sock.so 00:02:48.467 SYMLINK libspdk_event_iobuf.so 00:02:48.467 SYMLINK libspdk_event_vmd.so 00:02:48.726 CC module/event/subsystems/accel/accel.o 00:02:48.984 LIB libspdk_event_accel.a 00:02:48.984 SO libspdk_event_accel.so.6.0 00:02:49.243 SYMLINK libspdk_event_accel.so 00:02:49.501 CC module/event/subsystems/bdev/bdev.o 00:02:49.759 LIB libspdk_event_bdev.a 00:02:49.759 SO libspdk_event_bdev.so.6.0 00:02:49.759 SYMLINK libspdk_event_bdev.so 00:02:50.017 CC module/event/subsystems/nbd/nbd.o 00:02:50.017 CC module/event/subsystems/scsi/scsi.o 00:02:50.017 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:50.017 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:50.017 CC module/event/subsystems/ublk/ublk.o 00:02:50.275 LIB libspdk_event_nbd.a 00:02:50.275 LIB libspdk_event_scsi.a 00:02:50.275 SO libspdk_event_nbd.so.6.0 00:02:50.275 SO libspdk_event_scsi.so.6.0 00:02:50.275 LIB libspdk_event_ublk.a 00:02:50.275 SYMLINK libspdk_event_nbd.so 00:02:50.275 LIB libspdk_event_nvmf.a 00:02:50.275 SYMLINK libspdk_event_scsi.so 00:02:50.275 SO libspdk_event_ublk.so.3.0 00:02:50.275 SO libspdk_event_nvmf.so.6.0 00:02:50.533 SYMLINK libspdk_event_ublk.so 00:02:50.533 SYMLINK libspdk_event_nvmf.so 00:02:50.533 CC module/event/subsystems/iscsi/iscsi.o 00:02:50.533 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:50.791 LIB libspdk_event_iscsi.a 00:02:50.791 SO libspdk_event_iscsi.so.6.0 00:02:50.791 LIB libspdk_event_vhost_scsi.a 00:02:50.791 SO libspdk_event_vhost_scsi.so.3.0 00:02:50.791 SYMLINK libspdk_event_iscsi.so 00:02:51.050 SYMLINK libspdk_event_vhost_scsi.so 00:02:51.050 SO libspdk.so.6.0 00:02:51.050 SYMLINK libspdk.so 00:02:51.308 CXX app/trace/trace.o 00:02:51.308 CC app/trace_record/trace_record.o 00:02:51.308 CC app/spdk_nvme_perf/perf.o 00:02:51.308 CC app/spdk_lspci/spdk_lspci.o 00:02:51.569 CC app/nvmf_tgt/nvmf_main.o 00:02:51.569 CC app/iscsi_tgt/iscsi_tgt.o 00:02:51.569 CC app/spdk_tgt/spdk_tgt.o 00:02:51.569 CC examples/ioat/perf/perf.o 
00:02:51.569 CC test/thread/poller_perf/poller_perf.o 00:02:51.569 CC examples/util/zipf/zipf.o 00:02:51.569 LINK spdk_lspci 00:02:51.827 LINK iscsi_tgt 00:02:51.827 LINK zipf 00:02:51.827 LINK spdk_tgt 00:02:51.827 LINK poller_perf 00:02:51.827 LINK ioat_perf 00:02:51.827 LINK nvmf_tgt 00:02:51.827 LINK spdk_trace_record 00:02:52.085 LINK spdk_trace 00:02:52.085 CC app/spdk_nvme_identify/identify.o 00:02:52.085 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:52.086 CC examples/ioat/verify/verify.o 00:02:52.086 CC app/spdk_nvme_discover/discovery_aer.o 00:02:52.345 CC app/spdk_top/spdk_top.o 00:02:52.345 CC app/spdk_dd/spdk_dd.o 00:02:52.345 CC test/dma/test_dma/test_dma.o 00:02:52.345 CC test/app/bdev_svc/bdev_svc.o 00:02:52.345 LINK verify 00:02:52.345 LINK interrupt_tgt 00:02:52.345 LINK spdk_nvme_discover 00:02:52.604 LINK spdk_nvme_perf 00:02:52.604 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:52.604 LINK bdev_svc 00:02:52.604 CC test/app/histogram_perf/histogram_perf.o 00:02:52.604 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:52.862 LINK test_dma 00:02:52.862 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:52.862 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:52.862 LINK histogram_perf 00:02:52.862 LINK spdk_dd 00:02:52.862 CC examples/thread/thread/thread_ex.o 00:02:52.862 LINK spdk_nvme_identify 00:02:52.862 LINK nvme_fuzz 00:02:53.121 TEST_HEADER include/spdk/accel.h 00:02:53.121 TEST_HEADER include/spdk/accel_module.h 00:02:53.121 TEST_HEADER include/spdk/assert.h 00:02:53.121 TEST_HEADER include/spdk/barrier.h 00:02:53.121 TEST_HEADER include/spdk/base64.h 00:02:53.121 TEST_HEADER include/spdk/bdev.h 00:02:53.121 TEST_HEADER include/spdk/bdev_module.h 00:02:53.121 TEST_HEADER include/spdk/bdev_zone.h 00:02:53.121 TEST_HEADER include/spdk/bit_array.h 00:02:53.121 TEST_HEADER include/spdk/bit_pool.h 00:02:53.121 TEST_HEADER include/spdk/blob_bdev.h 00:02:53.121 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:53.121 TEST_HEADER include/spdk/blobfs.h 
00:02:53.121 TEST_HEADER include/spdk/blob.h 00:02:53.121 TEST_HEADER include/spdk/conf.h 00:02:53.121 TEST_HEADER include/spdk/config.h 00:02:53.121 TEST_HEADER include/spdk/cpuset.h 00:02:53.121 TEST_HEADER include/spdk/crc16.h 00:02:53.121 TEST_HEADER include/spdk/crc32.h 00:02:53.121 TEST_HEADER include/spdk/crc64.h 00:02:53.121 TEST_HEADER include/spdk/dif.h 00:02:53.121 TEST_HEADER include/spdk/dma.h 00:02:53.121 TEST_HEADER include/spdk/endian.h 00:02:53.122 TEST_HEADER include/spdk/env_dpdk.h 00:02:53.122 TEST_HEADER include/spdk/env.h 00:02:53.122 TEST_HEADER include/spdk/event.h 00:02:53.122 TEST_HEADER include/spdk/fd_group.h 00:02:53.122 TEST_HEADER include/spdk/fd.h 00:02:53.122 TEST_HEADER include/spdk/file.h 00:02:53.122 TEST_HEADER include/spdk/ftl.h 00:02:53.122 TEST_HEADER include/spdk/gpt_spec.h 00:02:53.122 TEST_HEADER include/spdk/hexlify.h 00:02:53.122 TEST_HEADER include/spdk/histogram_data.h 00:02:53.122 TEST_HEADER include/spdk/idxd.h 00:02:53.122 TEST_HEADER include/spdk/idxd_spec.h 00:02:53.122 TEST_HEADER include/spdk/init.h 00:02:53.122 TEST_HEADER include/spdk/ioat.h 00:02:53.122 LINK spdk_top 00:02:53.122 TEST_HEADER include/spdk/ioat_spec.h 00:02:53.122 TEST_HEADER include/spdk/iscsi_spec.h 00:02:53.122 TEST_HEADER include/spdk/json.h 00:02:53.122 TEST_HEADER include/spdk/jsonrpc.h 00:02:53.122 TEST_HEADER include/spdk/keyring.h 00:02:53.122 TEST_HEADER include/spdk/keyring_module.h 00:02:53.122 TEST_HEADER include/spdk/likely.h 00:02:53.122 TEST_HEADER include/spdk/log.h 00:02:53.122 TEST_HEADER include/spdk/lvol.h 00:02:53.122 LINK thread 00:02:53.122 TEST_HEADER include/spdk/memory.h 00:02:53.122 TEST_HEADER include/spdk/mmio.h 00:02:53.122 TEST_HEADER include/spdk/nbd.h 00:02:53.122 TEST_HEADER include/spdk/net.h 00:02:53.122 TEST_HEADER include/spdk/notify.h 00:02:53.122 TEST_HEADER include/spdk/nvme.h 00:02:53.122 TEST_HEADER include/spdk/nvme_intel.h 00:02:53.122 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:53.122 TEST_HEADER 
include/spdk/nvme_ocssd_spec.h 00:02:53.122 TEST_HEADER include/spdk/nvme_spec.h 00:02:53.122 TEST_HEADER include/spdk/nvme_zns.h 00:02:53.122 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:53.122 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:53.122 TEST_HEADER include/spdk/nvmf.h 00:02:53.122 TEST_HEADER include/spdk/nvmf_spec.h 00:02:53.122 LINK vhost_fuzz 00:02:53.122 TEST_HEADER include/spdk/nvmf_transport.h 00:02:53.122 TEST_HEADER include/spdk/opal.h 00:02:53.122 TEST_HEADER include/spdk/opal_spec.h 00:02:53.122 TEST_HEADER include/spdk/pci_ids.h 00:02:53.122 TEST_HEADER include/spdk/pipe.h 00:02:53.122 TEST_HEADER include/spdk/queue.h 00:02:53.122 TEST_HEADER include/spdk/reduce.h 00:02:53.122 TEST_HEADER include/spdk/rpc.h 00:02:53.122 TEST_HEADER include/spdk/scheduler.h 00:02:53.122 TEST_HEADER include/spdk/scsi.h 00:02:53.122 TEST_HEADER include/spdk/scsi_spec.h 00:02:53.122 TEST_HEADER include/spdk/sock.h 00:02:53.122 TEST_HEADER include/spdk/stdinc.h 00:02:53.122 CC app/fio/nvme/fio_plugin.o 00:02:53.122 TEST_HEADER include/spdk/string.h 00:02:53.122 TEST_HEADER include/spdk/thread.h 00:02:53.122 TEST_HEADER include/spdk/trace.h 00:02:53.122 TEST_HEADER include/spdk/trace_parser.h 00:02:53.122 TEST_HEADER include/spdk/tree.h 00:02:53.122 TEST_HEADER include/spdk/ublk.h 00:02:53.122 TEST_HEADER include/spdk/util.h 00:02:53.122 TEST_HEADER include/spdk/uuid.h 00:02:53.122 TEST_HEADER include/spdk/version.h 00:02:53.122 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:53.122 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:53.122 TEST_HEADER include/spdk/vhost.h 00:02:53.122 TEST_HEADER include/spdk/vmd.h 00:02:53.122 CC app/vhost/vhost.o 00:02:53.122 TEST_HEADER include/spdk/xor.h 00:02:53.122 TEST_HEADER include/spdk/zipf.h 00:02:53.122 CXX test/cpp_headers/accel.o 00:02:53.381 CC test/env/mem_callbacks/mem_callbacks.o 00:02:53.381 CC app/fio/bdev/fio_plugin.o 00:02:53.381 CXX test/cpp_headers/accel_module.o 00:02:53.381 CC 
examples/sock/hello_world/hello_sock.o 00:02:53.381 CXX test/cpp_headers/assert.o 00:02:53.381 LINK vhost 00:02:53.381 CC examples/vmd/lsvmd/lsvmd.o 00:02:53.639 CXX test/cpp_headers/barrier.o 00:02:53.639 CC examples/idxd/perf/perf.o 00:02:53.639 CC test/app/jsoncat/jsoncat.o 00:02:53.639 LINK lsvmd 00:02:53.639 LINK hello_sock 00:02:53.639 CXX test/cpp_headers/base64.o 00:02:53.639 CC test/app/stub/stub.o 00:02:53.639 LINK jsoncat 00:02:53.639 LINK spdk_nvme 00:02:53.897 CXX test/cpp_headers/bdev.o 00:02:53.897 LINK mem_callbacks 00:02:53.897 CC examples/vmd/led/led.o 00:02:53.897 LINK spdk_bdev 00:02:53.897 LINK stub 00:02:53.897 LINK idxd_perf 00:02:53.897 CC test/env/vtophys/vtophys.o 00:02:53.897 CXX test/cpp_headers/bdev_module.o 00:02:53.897 CC test/env/memory/memory_ut.o 00:02:53.897 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:54.154 LINK led 00:02:54.154 CXX test/cpp_headers/bdev_zone.o 00:02:54.154 CXX test/cpp_headers/bit_array.o 00:02:54.154 LINK vtophys 00:02:54.154 CC test/env/pci/pci_ut.o 00:02:54.154 LINK env_dpdk_post_init 00:02:54.154 LINK iscsi_fuzz 00:02:54.412 CXX test/cpp_headers/bit_pool.o 00:02:54.412 CC examples/accel/perf/accel_perf.o 00:02:54.412 CC test/event/event_perf/event_perf.o 00:02:54.412 CC test/event/reactor/reactor.o 00:02:54.412 CXX test/cpp_headers/blob_bdev.o 00:02:54.412 CC test/event/reactor_perf/reactor_perf.o 00:02:54.412 CC examples/blob/hello_world/hello_blob.o 00:02:54.412 CC examples/nvme/hello_world/hello_world.o 00:02:54.675 LINK pci_ut 00:02:54.675 LINK reactor 00:02:54.675 LINK reactor_perf 00:02:54.675 LINK event_perf 00:02:54.675 CC examples/blob/cli/blobcli.o 00:02:54.675 CXX test/cpp_headers/blobfs_bdev.o 00:02:54.675 LINK hello_blob 00:02:54.675 LINK hello_world 00:02:54.675 LINK accel_perf 00:02:54.933 CXX test/cpp_headers/blobfs.o 00:02:54.933 CC examples/nvme/reconnect/reconnect.o 00:02:54.933 CC test/event/app_repeat/app_repeat.o 00:02:54.933 CC examples/nvme/nvme_manage/nvme_manage.o 
00:02:54.933 CXX test/cpp_headers/blob.o 00:02:54.933 CC test/event/scheduler/scheduler.o 00:02:54.933 CC examples/nvme/arbitration/arbitration.o 00:02:54.933 LINK app_repeat 00:02:55.191 LINK memory_ut 00:02:55.191 CXX test/cpp_headers/conf.o 00:02:55.191 LINK blobcli 00:02:55.191 CC examples/nvme/hotplug/hotplug.o 00:02:55.191 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:55.191 LINK scheduler 00:02:55.191 LINK reconnect 00:02:55.191 CXX test/cpp_headers/config.o 00:02:55.191 CXX test/cpp_headers/cpuset.o 00:02:55.449 LINK cmb_copy 00:02:55.449 LINK hotplug 00:02:55.449 LINK arbitration 00:02:55.449 CC test/rpc_client/rpc_client_test.o 00:02:55.449 CXX test/cpp_headers/crc16.o 00:02:55.449 CC examples/bdev/hello_world/hello_bdev.o 00:02:55.449 CXX test/cpp_headers/crc32.o 00:02:55.449 CXX test/cpp_headers/crc64.o 00:02:55.449 CC test/nvme/aer/aer.o 00:02:55.708 CXX test/cpp_headers/dif.o 00:02:55.708 LINK nvme_manage 00:02:55.708 CXX test/cpp_headers/dma.o 00:02:55.708 CC test/accel/dif/dif.o 00:02:55.708 LINK rpc_client_test 00:02:55.708 LINK hello_bdev 00:02:55.708 CXX test/cpp_headers/endian.o 00:02:55.708 CC examples/bdev/bdevperf/bdevperf.o 00:02:55.708 LINK aer 00:02:55.966 CXX test/cpp_headers/env_dpdk.o 00:02:55.966 CC examples/nvme/abort/abort.o 00:02:55.966 CC test/blobfs/mkfs/mkfs.o 00:02:55.966 CC test/lvol/esnap/esnap.o 00:02:55.966 CC test/nvme/reset/reset.o 00:02:55.966 CXX test/cpp_headers/env.o 00:02:55.966 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:56.224 CC test/nvme/sgl/sgl.o 00:02:56.224 CC test/nvme/e2edp/nvme_dp.o 00:02:56.224 LINK dif 00:02:56.225 LINK mkfs 00:02:56.225 CXX test/cpp_headers/event.o 00:02:56.225 LINK abort 00:02:56.225 LINK pmr_persistence 00:02:56.225 LINK reset 00:02:56.482 LINK sgl 00:02:56.482 LINK nvme_dp 00:02:56.482 CXX test/cpp_headers/fd_group.o 00:02:56.482 CXX test/cpp_headers/fd.o 00:02:56.482 CXX test/cpp_headers/file.o 00:02:56.482 CXX test/cpp_headers/ftl.o 00:02:56.482 CXX 
test/cpp_headers/gpt_spec.o 00:02:56.482 LINK bdevperf 00:02:56.482 CC test/nvme/overhead/overhead.o 00:02:56.739 CC test/nvme/err_injection/err_injection.o 00:02:56.739 CXX test/cpp_headers/hexlify.o 00:02:56.739 CXX test/cpp_headers/histogram_data.o 00:02:56.739 CC test/bdev/bdevio/bdevio.o 00:02:56.739 CC test/nvme/startup/startup.o 00:02:56.739 CXX test/cpp_headers/idxd.o 00:02:56.739 CXX test/cpp_headers/idxd_spec.o 00:02:56.739 CC test/nvme/reserve/reserve.o 00:02:56.739 LINK err_injection 00:02:56.996 CC test/nvme/simple_copy/simple_copy.o 00:02:56.997 LINK overhead 00:02:56.997 LINK startup 00:02:56.997 LINK reserve 00:02:56.997 CXX test/cpp_headers/init.o 00:02:56.997 CC test/nvme/connect_stress/connect_stress.o 00:02:56.997 LINK bdevio 00:02:56.997 CC test/nvme/boot_partition/boot_partition.o 00:02:56.997 CXX test/cpp_headers/ioat.o 00:02:56.997 CC examples/nvmf/nvmf/nvmf.o 00:02:57.254 LINK simple_copy 00:02:57.254 CC test/nvme/compliance/nvme_compliance.o 00:02:57.254 CXX test/cpp_headers/ioat_spec.o 00:02:57.254 LINK boot_partition 00:02:57.254 LINK connect_stress 00:02:57.254 CC test/nvme/fused_ordering/fused_ordering.o 00:02:57.254 CXX test/cpp_headers/iscsi_spec.o 00:02:57.254 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:57.519 LINK nvmf 00:02:57.519 CC test/nvme/fdp/fdp.o 00:02:57.519 CXX test/cpp_headers/json.o 00:02:57.519 CXX test/cpp_headers/jsonrpc.o 00:02:57.519 CXX test/cpp_headers/keyring.o 00:02:57.519 LINK fused_ordering 00:02:57.519 LINK doorbell_aers 00:02:57.519 CC test/nvme/cuse/cuse.o 00:02:57.519 LINK nvme_compliance 00:02:57.519 CXX test/cpp_headers/keyring_module.o 00:02:57.777 CXX test/cpp_headers/likely.o 00:02:57.777 CXX test/cpp_headers/log.o 00:02:57.777 CXX test/cpp_headers/lvol.o 00:02:57.777 CXX test/cpp_headers/memory.o 00:02:57.777 CXX test/cpp_headers/mmio.o 00:02:57.777 LINK fdp 00:02:57.777 CXX test/cpp_headers/nbd.o 00:02:57.777 CXX test/cpp_headers/net.o 00:02:57.777 CXX test/cpp_headers/notify.o 00:02:57.777 
CXX test/cpp_headers/nvme.o 00:02:57.777 CXX test/cpp_headers/nvme_intel.o 00:02:57.777 CXX test/cpp_headers/nvme_ocssd.o 00:02:58.035 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:58.035 CXX test/cpp_headers/nvme_spec.o 00:02:58.035 CXX test/cpp_headers/nvme_zns.o 00:02:58.035 CXX test/cpp_headers/nvmf_cmd.o 00:02:58.035 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:58.035 CXX test/cpp_headers/nvmf.o 00:02:58.035 CXX test/cpp_headers/nvmf_spec.o 00:02:58.035 CXX test/cpp_headers/nvmf_transport.o 00:02:58.035 CXX test/cpp_headers/opal.o 00:02:58.035 CXX test/cpp_headers/opal_spec.o 00:02:58.294 CXX test/cpp_headers/pci_ids.o 00:02:58.294 CXX test/cpp_headers/pipe.o 00:02:58.294 CXX test/cpp_headers/queue.o 00:02:58.294 CXX test/cpp_headers/reduce.o 00:02:58.294 CXX test/cpp_headers/rpc.o 00:02:58.294 CXX test/cpp_headers/scheduler.o 00:02:58.294 CXX test/cpp_headers/scsi.o 00:02:58.294 CXX test/cpp_headers/scsi_spec.o 00:02:58.294 CXX test/cpp_headers/sock.o 00:02:58.294 CXX test/cpp_headers/stdinc.o 00:02:58.552 CXX test/cpp_headers/string.o 00:02:58.552 CXX test/cpp_headers/thread.o 00:02:58.552 CXX test/cpp_headers/trace.o 00:02:58.552 CXX test/cpp_headers/trace_parser.o 00:02:58.552 CXX test/cpp_headers/tree.o 00:02:58.552 CXX test/cpp_headers/ublk.o 00:02:58.552 CXX test/cpp_headers/util.o 00:02:58.552 CXX test/cpp_headers/uuid.o 00:02:58.552 CXX test/cpp_headers/version.o 00:02:58.552 CXX test/cpp_headers/vfio_user_pci.o 00:02:58.552 CXX test/cpp_headers/vfio_user_spec.o 00:02:58.552 CXX test/cpp_headers/vhost.o 00:02:58.808 CXX test/cpp_headers/vmd.o 00:02:58.808 CXX test/cpp_headers/xor.o 00:02:58.808 CXX test/cpp_headers/zipf.o 00:02:58.808 LINK cuse 00:03:01.334 LINK esnap 00:03:01.334 00:03:01.334 real 1m4.443s 00:03:01.334 user 5m55.925s 00:03:01.334 sys 1m50.826s 00:03:01.334 19:41:29 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:01.334 ************************************ 00:03:01.334 END TEST make 00:03:01.334 
************************************ 00:03:01.334 19:41:29 make -- common/autotest_common.sh@10 -- $ set +x 00:03:01.334 19:41:29 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:01.334 19:41:29 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:01.334 19:41:29 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:01.334 19:41:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:01.334 19:41:29 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:01.334 19:41:29 -- pm/common@44 -- $ pid=5208 00:03:01.334 19:41:29 -- pm/common@50 -- $ kill -TERM 5208 00:03:01.334 19:41:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:01.334 19:41:29 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:01.334 19:41:29 -- pm/common@44 -- $ pid=5210 00:03:01.334 19:41:29 -- pm/common@50 -- $ kill -TERM 5210 00:03:01.593 19:41:30 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:01.593 19:41:30 -- nvmf/common.sh@7 -- # uname -s 00:03:01.593 19:41:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:01.593 19:41:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:01.593 19:41:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:01.593 19:41:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:01.593 19:41:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:01.593 19:41:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:01.593 19:41:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:01.593 19:41:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:01.593 19:41:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:01.593 19:41:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:01.593 19:41:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0707769d-9dae-4359-8edf-9efcc4e972e8 00:03:01.593 19:41:30 -- nvmf/common.sh@18 
-- # NVME_HOSTID=0707769d-9dae-4359-8edf-9efcc4e972e8 00:03:01.593 19:41:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:01.593 19:41:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:01.593 19:41:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:01.593 19:41:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:01.593 19:41:30 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:01.593 19:41:30 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:01.593 19:41:30 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:01.593 19:41:30 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:01.593 19:41:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:01.593 19:41:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:01.593 19:41:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:01.593 19:41:30 -- paths/export.sh@5 -- # export PATH 00:03:01.593 19:41:30 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:01.593 19:41:30 -- nvmf/common.sh@47 -- # : 0 00:03:01.593 19:41:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:01.593 19:41:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:01.593 19:41:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:01.593 19:41:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:01.593 19:41:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:01.593 19:41:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:01.593 19:41:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:01.593 19:41:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:01.593 19:41:30 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:01.593 19:41:30 -- spdk/autotest.sh@32 -- # uname -s 00:03:01.593 19:41:30 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:01.593 19:41:30 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:01.593 19:41:30 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:01.593 19:41:30 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:01.593 19:41:30 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:01.593 19:41:30 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:01.593 19:41:30 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:01.593 19:41:30 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:01.593 19:41:30 -- spdk/autotest.sh@48 -- # udevadm_pid=52874 00:03:01.593 19:41:30 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:01.593 19:41:30 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:01.593 
19:41:30 -- pm/common@17 -- # local monitor 00:03:01.593 19:41:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:01.593 19:41:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:01.593 19:41:30 -- pm/common@25 -- # sleep 1 00:03:01.593 19:41:30 -- pm/common@21 -- # date +%s 00:03:01.593 19:41:30 -- pm/common@21 -- # date +%s 00:03:01.593 19:41:30 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721850090 00:03:01.593 19:41:30 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721850090 00:03:01.593 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721850090_collect-cpu-load.pm.log 00:03:01.593 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721850090_collect-vmstat.pm.log 00:03:02.535 19:41:31 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:02.535 19:41:31 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:02.535 19:41:31 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:02.535 19:41:31 -- common/autotest_common.sh@10 -- # set +x 00:03:02.535 19:41:31 -- spdk/autotest.sh@59 -- # create_test_list 00:03:02.535 19:41:31 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:02.535 19:41:31 -- common/autotest_common.sh@10 -- # set +x 00:03:02.535 19:41:31 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:02.535 19:41:31 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:02.535 19:41:31 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:02.535 19:41:31 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:02.535 19:41:31 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:02.535 19:41:31 -- 
spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:02.535 19:41:31 -- common/autotest_common.sh@1455 -- # uname 00:03:02.535 19:41:31 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:02.535 19:41:31 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:02.535 19:41:31 -- common/autotest_common.sh@1475 -- # uname 00:03:02.793 19:41:31 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:02.793 19:41:31 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:02.793 19:41:31 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:02.793 19:41:31 -- spdk/autotest.sh@72 -- # hash lcov 00:03:02.793 19:41:31 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:02.793 19:41:31 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:02.793 --rc lcov_branch_coverage=1 00:03:02.793 --rc lcov_function_coverage=1 00:03:02.793 --rc genhtml_branch_coverage=1 00:03:02.793 --rc genhtml_function_coverage=1 00:03:02.793 --rc genhtml_legend=1 00:03:02.793 --rc geninfo_all_blocks=1 00:03:02.793 ' 00:03:02.793 19:41:31 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:02.793 --rc lcov_branch_coverage=1 00:03:02.793 --rc lcov_function_coverage=1 00:03:02.793 --rc genhtml_branch_coverage=1 00:03:02.793 --rc genhtml_function_coverage=1 00:03:02.793 --rc genhtml_legend=1 00:03:02.793 --rc geninfo_all_blocks=1 00:03:02.793 ' 00:03:02.793 19:41:31 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:02.793 --rc lcov_branch_coverage=1 00:03:02.793 --rc lcov_function_coverage=1 00:03:02.793 --rc genhtml_branch_coverage=1 00:03:02.793 --rc genhtml_function_coverage=1 00:03:02.793 --rc genhtml_legend=1 00:03:02.793 --rc geninfo_all_blocks=1 00:03:02.793 --no-external' 00:03:02.793 19:41:31 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:02.793 --rc lcov_branch_coverage=1 00:03:02.793 --rc lcov_function_coverage=1 00:03:02.793 --rc genhtml_branch_coverage=1 00:03:02.793 --rc genhtml_function_coverage=1 00:03:02.793 --rc genhtml_legend=1 00:03:02.793 
--rc geninfo_all_blocks=1 00:03:02.793 --no-external' 00:03:02.793 19:41:31 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:02.793 lcov: LCOV version 1.14 00:03:02.793 19:41:31 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:20.877 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:20.877 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:33.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:33.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:03:33.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:33.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:33.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:33.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:33.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:33.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:33.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:33.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:33.089 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:33.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:33.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:33.089 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:33.089 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:33.090 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:33.090 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:33.090 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:33.090 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:33.090 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:33.090 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:33.090 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:33.090 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:33.090 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:33.090 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:33.090 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:33.090 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:33.090 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:33.090 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:33.090 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:33.090 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:33.090 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:33.090 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:33.090 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:33.090 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:33.090 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:33.090 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:33.090 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:33.090 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:33.090 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:33.090 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:33.090 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:33.090 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:33.090 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:33.090 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:33.090 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:33.090 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:33.090 geninfo: 
00:03:33.090 geninfo: WARNING: GCOV did not produce any data ("no functions found") for the following .gcno files under /home/vagrant/spdk_repo/spdk/test/cpp_headers/ — each reported as "<name>.gcno:no functions found" followed by a geninfo warning for the same file:
00:03:33.090-091 env_dpdk, env, event, fd_group, fd, file, gpt_spec, ftl, hexlify, histogram_data, idxd, idxd_spec, init, ioat, ioat_spec, iscsi_spec, json, jsonrpc, keyring, keyring_module, likely, log, lvol, memory, mmio, nbd, net, notify, nvme, nvme_intel, nvme_ocssd, nvme_ocssd_spec, nvme_spec, nvme_zns, nvmf_cmd, nvmf_fc_spec, nvmf, nvmf_spec, nvmf_transport, opal, opal_spec, pci_ids, pipe, queue, reduce, rpc, scheduler, scsi, scsi_spec, sock, stdinc, string, thread, trace, trace_parser, tree, ublk, util, uuid, version, vfio_user_pci, vfio_user_spec, vhost, vmd, xor, zipf (all .gcno)
00:03:33.091 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:36.370 19:42:04 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:36.370 19:42:04 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:36.370 19:42:04 -- common/autotest_common.sh@10 -- # set +x 00:03:36.370 19:42:04 -- spdk/autotest.sh@91 -- # rm -f 00:03:36.370 19:42:04 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:36.370 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:36.628 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:36.628 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:36.628 19:42:05 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:36.628 19:42:05 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:36.628 19:42:05 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:36.628 19:42:05 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:36.628 19:42:05 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:36.628 19:42:05 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:36.628 19:42:05 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:36.628 19:42:05 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:36.628 19:42:05 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:36.628 19:42:05 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:36.628 19:42:05 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:03:36.628 19:42:05 -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:03:36.628 19:42:05 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:03:36.628 19:42:05 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:36.628 19:42:05 -- common/autotest_common.sh@1672 -- # for nvme 
in /sys/block/nvme* 00:03:36.628 19:42:05 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 00:03:36.628 19:42:05 -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:03:36.628 19:42:05 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:03:36.628 19:42:05 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:36.628 19:42:05 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:36.628 19:42:05 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:36.628 19:42:05 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:36.628 19:42:05 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:36.628 19:42:05 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:36.628 19:42:05 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:36.628 19:42:05 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:36.628 19:42:05 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:36.628 19:42:05 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:36.628 19:42:05 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:36.628 19:42:05 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:36.628 No valid GPT data, bailing 00:03:36.628 19:42:05 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:36.628 19:42:05 -- scripts/common.sh@391 -- # pt= 00:03:36.628 19:42:05 -- scripts/common.sh@392 -- # return 1 00:03:36.628 19:42:05 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:36.628 1+0 records in 00:03:36.628 1+0 records out 00:03:36.628 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00388849 s, 270 MB/s 00:03:36.628 19:42:05 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:36.628 19:42:05 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:36.628 19:42:05 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n2 00:03:36.628 19:42:05 
-- scripts/common.sh@378 -- # local block=/dev/nvme0n2 pt 00:03:36.628 19:42:05 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n2 00:03:36.628 No valid GPT data, bailing 00:03:36.628 19:42:05 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:03:36.628 19:42:05 -- scripts/common.sh@391 -- # pt= 00:03:36.628 19:42:05 -- scripts/common.sh@392 -- # return 1 00:03:36.628 19:42:05 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n2 bs=1M count=1 00:03:36.628 1+0 records in 00:03:36.628 1+0 records out 00:03:36.628 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00411497 s, 255 MB/s 00:03:36.628 19:42:05 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:36.628 19:42:05 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:36.628 19:42:05 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n3 00:03:36.628 19:42:05 -- scripts/common.sh@378 -- # local block=/dev/nvme0n3 pt 00:03:36.628 19:42:05 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n3 00:03:36.887 No valid GPT data, bailing 00:03:36.887 19:42:05 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:03:36.887 19:42:05 -- scripts/common.sh@391 -- # pt= 00:03:36.887 19:42:05 -- scripts/common.sh@392 -- # return 1 00:03:36.887 19:42:05 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n3 bs=1M count=1 00:03:36.887 1+0 records in 00:03:36.887 1+0 records out 00:03:36.887 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00323264 s, 324 MB/s 00:03:36.887 19:42:05 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:36.887 19:42:05 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:36.887 19:42:05 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:03:36.887 19:42:05 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:03:36.887 19:42:05 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:36.887 No valid GPT data, bailing 
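The trace above repeats one pattern per namespace: probe the device for a GPT (spdk-gpt.py), fall back to `blkid -s PTTYPE`, and when neither finds a partition table, zero the first 1 MiB with `dd` so later tests start from a clean label area. A minimal sketch of that wipe step against a scratch file rather than a real /dev/nvme* node (the `wipe_if_unlabeled` helper name is illustrative, not part of the SPDK scripts):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical helper mirroring the autotest flow: if no partition-table
# type is detected on $1, zero its first 1 MiB (where GPT/MBR labels live).
wipe_if_unlabeled() {
    local dev=$1 pt
    # blkid exits non-zero when no PTTYPE is found; treat that as "unlabeled".
    pt=$(blkid -s PTTYPE -o value "$dev" 2>/dev/null || true)
    if [ -z "$pt" ]; then
        dd if=/dev/zero of="$dev" bs=1M count=1 conv=notrunc status=none
    fi
}

# Demo on a temporary 4 MiB file instead of a block device.
img=$(mktemp)
printf 'not-a-partition-table' > "$img"
truncate -s 4M "$img"
wipe_if_unlabeled "$img"

# Count non-zero bytes in the first MiB; after the wipe there are none.
head -c 1048576 "$img" | tr -d '\0' | wc -c
```

`conv=notrunc` matters on files (it keeps the remaining 3 MiB intact); on a real block device `dd` cannot truncate anyway, which is why the logged command omits it.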
00:03:36.887 19:42:05 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:36.887 19:42:05 -- scripts/common.sh@391 -- # pt= 00:03:36.887 19:42:05 -- scripts/common.sh@392 -- # return 1 00:03:36.887 19:42:05 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:36.887 1+0 records in 00:03:36.887 1+0 records out 00:03:36.887 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00434607 s, 241 MB/s 00:03:36.887 19:42:05 -- spdk/autotest.sh@118 -- # sync 00:03:36.887 19:42:05 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:36.887 19:42:05 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:36.887 19:42:05 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:38.786 19:42:07 -- spdk/autotest.sh@124 -- # uname -s 00:03:38.786 19:42:07 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:38.786 19:42:07 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:38.786 19:42:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:38.786 19:42:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:38.786 19:42:07 -- common/autotest_common.sh@10 -- # set +x 00:03:38.786 ************************************ 00:03:38.786 START TEST setup.sh 00:03:38.786 ************************************ 00:03:38.786 19:42:07 setup.sh -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:38.786 * Looking for test storage... 
00:03:38.786 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:38.786 19:42:07 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:38.786 19:42:07 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:38.786 19:42:07 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:38.786 19:42:07 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:38.786 19:42:07 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:38.786 19:42:07 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:38.786 ************************************ 00:03:38.786 START TEST acl 00:03:38.786 ************************************ 00:03:38.786 19:42:07 setup.sh.acl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:38.786 * Looking for test storage... 00:03:38.786 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:38.786 19:42:07 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:38.786 19:42:07 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:38.786 19:42:07 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:38.786 19:42:07 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:38.786 19:42:07 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:38.786 19:42:07 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:38.786 19:42:07 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:38.786 19:42:07 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:38.786 19:42:07 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:38.786 19:42:07 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:38.786 19:42:07 setup.sh.acl -- common/autotest_common.sh@1673 -- # 
is_block_zoned nvme0n2 00:03:38.786 19:42:07 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:03:38.786 19:42:07 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:03:38.786 19:42:07 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:38.786 19:42:07 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:38.786 19:42:07 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 00:03:38.786 19:42:07 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:03:38.786 19:42:07 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:03:38.786 19:42:07 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:38.786 19:42:07 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:38.786 19:42:07 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:38.786 19:42:07 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:38.786 19:42:07 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:38.786 19:42:07 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:38.786 19:42:07 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:38.786 19:42:07 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:38.786 19:42:07 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:38.786 19:42:07 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:38.786 19:42:07 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:38.786 19:42:07 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:38.786 19:42:07 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:39.352 19:42:07 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:39.352 19:42:07 setup.sh.acl 
-- setup/acl.sh@16 -- # local dev driver 00:03:39.352 19:42:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.352 19:42:07 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:39.352 19:42:07 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.352 19:42:07 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:39.917 19:42:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:03:39.917 19:42:08 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:39.917 19:42:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.917 Hugepages 00:03:39.917 node hugesize free / total 00:03:39.917 19:42:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:39.917 19:42:08 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:39.917 19:42:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.917 00:03:39.917 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:39.917 19:42:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:39.917 19:42:08 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:39.917 19:42:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.176 19:42:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:40.176 19:42:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:40.176 19:42:08 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:40.176 19:42:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.176 19:42:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:03:40.176 19:42:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:40.176 19:42:08 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:40.176 19:42:08 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:40.176 19:42:08 setup.sh.acl -- setup/acl.sh@22 -- # 
drivers["$dev"]=nvme 00:03:40.176 19:42:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.176 19:42:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:03:40.176 19:42:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:40.176 19:42:08 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:40.176 19:42:08 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:40.176 19:42:08 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:40.176 19:42:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.176 19:42:08 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:03:40.176 19:42:08 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:40.176 19:42:08 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:40.176 19:42:08 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:40.176 19:42:08 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:40.176 ************************************ 00:03:40.176 START TEST denied 00:03:40.176 ************************************ 00:03:40.176 19:42:08 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:03:40.176 19:42:08 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:03:40.176 19:42:08 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:40.176 19:42:08 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:03:40.176 19:42:08 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.176 19:42:08 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:41.108 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:03:41.108 19:42:09 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:03:41.108 19:42:09 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:03:41.108 19:42:09 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:41.108 19:42:09 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:03:41.108 19:42:09 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:03:41.108 19:42:09 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:41.108 19:42:09 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:41.108 19:42:09 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:41.108 19:42:09 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:41.108 19:42:09 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:41.366 00:03:41.366 real 0m1.312s 00:03:41.366 user 0m0.544s 00:03:41.366 sys 0m0.724s 00:03:41.366 19:42:10 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:41.366 19:42:10 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:41.366 ************************************ 00:03:41.366 END TEST denied 00:03:41.366 ************************************ 00:03:41.625 19:42:10 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:41.625 19:42:10 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:41.625 19:42:10 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:41.625 19:42:10 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:41.625 ************************************ 00:03:41.625 START TEST allowed 00:03:41.625 ************************************ 00:03:41.625 19:42:10 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:03:41.625 19:42:10 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:03:41.625 19:42:10 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:41.625 19:42:10 setup.sh.acl.allowed -- 
setup/common.sh@9 -- # [[ output == output ]] 00:03:41.625 19:42:10 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:41.625 19:42:10 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:03:42.559 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:42.559 19:42:10 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:03:42.559 19:42:10 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:42.559 19:42:10 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:42.560 19:42:10 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:03:42.560 19:42:10 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:03:42.560 19:42:10 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:42.560 19:42:10 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:42.560 19:42:10 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:42.560 19:42:10 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:42.560 19:42:10 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:43.126 00:03:43.126 real 0m1.725s 00:03:43.126 user 0m0.641s 00:03:43.126 sys 0m1.097s 00:03:43.386 ************************************ 00:03:43.386 END TEST allowed 00:03:43.386 ************************************ 00:03:43.386 19:42:11 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:43.386 19:42:11 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:43.386 00:03:43.386 real 0m4.624s 00:03:43.386 user 0m2.006s 00:03:43.386 sys 0m2.594s 00:03:43.386 19:42:11 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:43.386 19:42:11 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:43.386 
************************************ 00:03:43.386 END TEST acl 00:03:43.386 ************************************ 00:03:43.386 19:42:11 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:43.386 19:42:11 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:43.386 19:42:11 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:43.386 19:42:11 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:43.386 ************************************ 00:03:43.386 START TEST hugepages 00:03:43.386 ************************************ 00:03:43.386 19:42:11 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:43.386 * Looking for test storage... 00:03:43.386 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:43.386 19:42:11 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:43.386 19:42:11 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:43.386 19:42:11 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:43.386 19:42:11 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:43.386 19:42:11 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:43.386 19:42:11 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:43.386 19:42:11 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:43.386 19:42:11 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:43.386 19:42:11 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:43.386 19:42:11 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:43.386 19:42:11 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.386 19:42:11 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.386 
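The `get_meminfo Hugepagesize` call that the trace below steps through splits each /proc/meminfo line on `': '` and returns the value column for the requested key. A self-contained sketch of that parse, simplified from setup/common.sh and fed a fixed sample so it does not depend on the host's /proc (the sample values are illustrative):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Simplified stand-in for setup/common.sh's get_meminfo: print the value
# column (kB units are dropped) of one meminfo field.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    # IFS=': ' splits "Key:   value kB" into var=Key, val=value, _=kB.
    while IFS=': ' read -r var val _; do
        if [ "$var" = "$get" ]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}

# Illustrative sample instead of the live /proc/meminfo.
sample=$(mktemp)
cat > "$sample" <<'EOF'
MemTotal:       12241976 kB
MemFree:         6018536 kB
HugePages_Total:       2048
Hugepagesize:       2048 kB
EOF

get_meminfo Hugepagesize "$sample"   # prints the value column only
```

Note that some hugepage counters (`HugePages_Total`, `HugePages_Free`) carry no `kB` suffix, which is why the parser reads the unit into a throwaway `_` field instead of requiring it.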
19:42:11 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.386 19:42:11 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.386 19:42:11 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.386 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6018536 kB' 'MemAvailable: 7402376 kB' 'Buffers: 2436 kB' 'Cached: 1597960 kB' 'SwapCached: 0 kB' 'Active: 437060 kB' 'Inactive: 1269080 kB' 'Active(anon): 116232 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1269080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 107572 kB' 'Mapped: 48608 kB' 'Shmem: 10488 kB' 'KReclaimable: 61744 kB' 'Slab: 135732 kB' 'SReclaimable: 61744 kB' 'SUnreclaim: 73988 kB' 'KernelStack: 6476 kB' 'PageTables: 4568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 346412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55108 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 
-- # IFS=': ' 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.387 19:42:11 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.387 19:42:11 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.387 
19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.387 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.388 19:42:11 
setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages 
== \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.388 19:42:11 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.388 19:42:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.388 19:42:12 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in 
"${!nodes_sys[@]}" 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:43.388 19:42:12 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:43.388 19:42:12 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:43.388 19:42:12 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:43.388 19:42:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:43.388 ************************************ 00:03:43.388 START TEST default_setup 00:03:43.388 ************************************ 00:03:43.388 19:42:12 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:03:43.388 19:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:43.388 19:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:43.388 19:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:43.388 19:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:43.388 19:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:43.388 19:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:43.388 19:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 
00:03:43.388 19:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:43.388 19:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:43.388 19:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:43.389 19:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:43.389 19:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:43.389 19:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:43.389 19:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:43.389 19:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:43.389 19:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:43.389 19:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:43.389 19:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:43.389 19:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:43.389 19:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:43.389 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.389 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:44.326 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:44.326 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:44.326 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:44.326 19:42:12 
setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8099568 kB' 'MemAvailable: 9483204 kB' 'Buffers: 2436 kB' 'Cached: 1597952 kB' 'SwapCached: 0 kB' 'Active: 454156 kB' 'Inactive: 1269084 kB' 'Active(anon): 133328 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1269084 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 124400 kB' 'Mapped: 48868 kB' 'Shmem: 10464 kB' 'KReclaimable: 61324 kB' 'Slab: 135336 kB' 'SReclaimable: 61324 kB' 'SUnreclaim: 74012 kB' 'KernelStack: 6368 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 363468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55140 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.326 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.326 19:42:12 
[... repetitive xtrace condensed: the IFS=': ' read loop in setup/common.sh@31-32 tests and skips each remaining /proc/meminfo key (Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted) against \A\n\o\n\H\u\g\e\P\a\g\e\s ...] 00:03:44.326-00:03:44.328 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.328 19:42:12
setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:44.328 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:44.590 19:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:44.590 19:42:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:44.590 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.590 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:44.590 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:44.590 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:44.590 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.590 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.590 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.590 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.590 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.590 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.591 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8099568 kB' 'MemAvailable: 9483204 kB' 'Buffers: 2436 kB' 'Cached: 1597952 kB' 'SwapCached: 0 kB' 'Active: 453816 kB' 'Inactive: 1269084 kB' 'Active(anon): 132988 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1269084 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 124104 kB' 'Mapped: 48700 kB' 'Shmem: 10464 kB' 'KReclaimable: 61324 kB' 
'Slab: 135336 kB' 'SReclaimable: 61324 kB' 'SUnreclaim: 74012 kB' 'KernelStack: 6400 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 363468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:44.591 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.591 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.591 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.591 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.591 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.591 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.591 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.591 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.591 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.591 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.591 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.591 19:42:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.591 19:42:12 
[... repetitive xtrace condensed: the same IFS=': ' read loop tests and skips each meminfo key (Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd) against \H\u\g\e\P\a\g\e\s\_\S\u\r\p ...] 00:03:44.591-00:03:44.592 19:42:13
00:03:44.592 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.592 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:44.592 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:44.592 19:42:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:44.592 19:42:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:44.592 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:44.592 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:44.592 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:44.592 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:44.592 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.592 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.593 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.593 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.593 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.593 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.593 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.593 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8099568 kB' 'MemAvailable: 9483216 kB' 'Buffers: 2436 kB' 'Cached: 1597952 kB' 'SwapCached: 0 kB' 'Active: 453760 kB' 'Inactive: 1269096 kB' 'Active(anon): 132932 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1269096 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 124040 kB' 'Mapped: 48700 kB' 'Shmem: 10464 kB' 'KReclaimable: 61324 kB' 'Slab: 135332 kB' 'SReclaimable: 61324 kB' 'SUnreclaim: 74008 kB' 'KernelStack: 6384 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 363468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' [... compare-and-continue trace elided: every key from MemTotal through HugePages_Free is tested against HugePages_Rsvd and skipped with continue ...] 00:03:44.594 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.594 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:44.594 19:42:13 
setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:44.594 19:42:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:44.594 19:42:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:44.594 nr_hugepages=1024 00:03:44.594 resv_hugepages=0 00:03:44.594 19:42:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:44.594 surplus_hugepages=0 00:03:44.594 19:42:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:44.594 anon_hugepages=0 00:03:44.594 19:42:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:44.594 19:42:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.594 19:42:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:44.594 19:42:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:44.594 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:44.594 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:44.594 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:44.594 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:44.594 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.595 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.595 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.595 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.595 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.595 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.595 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.595 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' [... second /proc/meminfo snapshot elided: identical to the snapshot above except 'AnonPages: 124096 kB' ...] [... compare-and-continue trace elided: MemTotal through Writeback are each tested against HugePages_Total and skipped with continue ...] 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l
]] 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.596 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.597 19:42:13 
setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8099568 kB' 'MemUsed: 4142408 kB' 'SwapCached: 0 kB' 'Active: 453760 kB' 'Inactive: 1269096 kB' 'Active(anon): 132932 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1269096 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 1600388 kB' 'Mapped: 48700 kB' 'AnonPages: 124040 kB' 'Shmem: 10464 kB' 'KernelStack: 6368 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61324 kB' 'Slab: 135328 kB' 'SReclaimable: 61324 kB' 'SUnreclaim: 74004 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.597 
19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
[[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.597 19:42:13 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.597 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.598 19:42:13 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.598 19:42:13 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:44.598 node0=1024 expecting 1024 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:44.598 00:03:44.598 real 0m1.097s 00:03:44.598 user 0m0.464s 00:03:44.598 sys 0m0.574s 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:44.598 19:42:13 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:44.598 ************************************ 00:03:44.598 END TEST default_setup 00:03:44.598 ************************************ 00:03:44.598 19:42:13 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:44.598 19:42:13 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:44.598 19:42:13 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:44.598 19:42:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:44.598 ************************************ 00:03:44.598 START TEST per_node_1G_alloc 00:03:44.598 
************************************ 00:03:44.598 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:03:44.598 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:44.598 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:03:44.598 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:44.598 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:44.598 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:44.598 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:44.598 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:44.598 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:44.598 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:44.598 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:44.598 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:44.598 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:44.598 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:44.598 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:44.598 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:44.598 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:44.598 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:44.598 
19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:44.598 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:44.598 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:44.598 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:44.598 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:03:44.598 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:44.598 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.598 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:45.171 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:45.171 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:45.171 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:45.171 
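The long trace that follows is `setup/common.sh`'s `get_meminfo` walking `/proc/meminfo` (or a per-node `meminfo`) line by line, splitting each line on `IFS=': '` and skipping every field until the requested one matches. A minimal sketch of that pattern, assuming a simplified stdin-based interface (an approximation for illustration, not the actual SPDK source):

```shell
# Sketch of the get_meminfo pattern traced below: split each meminfo-format
# line on ':' / space into field name and value, emit the value for the
# requested field, and fall back to 0 if the field is absent.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # e.g. "HugePages_Total:     512" -> var=HugePages_Total val=512
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    echo 0
}
```

Usage mirrors the trace: `get_meminfo HugePages_Surp < /proc/meminfo` yields the surplus-page count, and the caller treats a missing field as 0.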
19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9159316 kB' 'MemAvailable: 10542968 kB' 'Buffers: 2436 kB' 'Cached: 1597956 kB' 'SwapCached: 0 kB' 'Active: 453836 kB' 'Inactive: 1269100 kB' 'Active(anon): 133008 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1269100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 124192 kB' 'Mapped: 48836 kB' 'Shmem: 10464 kB' 
'KReclaimable: 61324 kB' 'Slab: 135336 kB' 'SReclaimable: 61324 kB' 'SUnreclaim: 74012 kB' 'KernelStack: 6440 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 363468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.171 19:42:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.171 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.172 19:42:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.172 
19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.172 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9159316 kB' 'MemAvailable: 10542968 kB' 'Buffers: 2436 kB' 'Cached: 1597956 kB' 'SwapCached: 0 kB' 'Active: 454024 kB' 'Inactive: 1269100 kB' 'Active(anon): 133196 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1269100 kB' 
'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 124352 kB' 'Mapped: 48836 kB' 'Shmem: 10464 kB' 'KReclaimable: 61324 kB' 'Slab: 135332 kB' 'SReclaimable: 61324 kB' 'SUnreclaim: 74008 kB' 'KernelStack: 6392 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 363468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55108 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.173 19:42:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.173 19:42:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.173 19:42:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.173 
19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.173 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.174 19:42:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.174 19:42:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:45.174 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9159316 kB' 'MemAvailable: 10542968 kB' 'Buffers: 2436 kB' 'Cached: 1597956 kB' 'SwapCached: 0 kB' 'Active: 453616 kB' 'Inactive: 1269100 kB' 'Active(anon): 132788 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1269100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123892 kB' 'Mapped: 48700 kB' 'Shmem: 10464 kB' 'KReclaimable: 61324 kB' 'Slab: 135332 kB' 'SReclaimable: 61324 kB' 'SUnreclaim: 74008 kB' 'KernelStack: 6400 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 363468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55108 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 
'DirectMap1G: 9437184 kB' 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.175 19:42:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.175 19:42:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.175 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.176 19:42:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.176 19:42:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.176 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@100 -- # resv=0 00:03:45.177 nr_hugepages=512 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:45.177 resv_hugepages=0 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:45.177 surplus_hugepages=0 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:45.177 anon_hugepages=0 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.177 19:42:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9159568 kB' 'MemAvailable: 10543220 kB' 'Buffers: 2436 kB' 'Cached: 1597956 kB' 'SwapCached: 0 kB' 'Active: 453884 kB' 'Inactive: 1269100 kB' 'Active(anon): 133056 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1269100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 124160 kB' 'Mapped: 48700 kB' 'Shmem: 10464 kB' 'KReclaimable: 61324 kB' 'Slab: 135328 kB' 'SReclaimable: 61324 kB' 'SUnreclaim: 74004 kB' 'KernelStack: 6400 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 363468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55108 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.177 19:42:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.177 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.178 19:42:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.178 19:42:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.178 19:42:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.178 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.178 19:42:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.179 19:42:13 
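The long trace above is one pass of the suite's `get_meminfo` helper: it reads a meminfo-style file line by line with `IFS=': '`, skips (`continue`) every key that is not the one requested, then echoes the matching value and returns. A minimal sketch of that loop, reconstructed from the traced `setup/common.sh` logic (the function name `get_meminfo_sketch` and the sample file path are hypothetical, not the actual SPDK source):

```shell
#!/usr/bin/env bash
# Sketch of the meminfo scan seen in the trace: split each line on ': ',
# skip non-matching keys, print the value of the requested key.
get_meminfo_sketch() {
    local get=$1 mem_f=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        # Every non-matching key corresponds to a "continue" line in the trace.
        [[ $var == "$get" ]] || continue
        echo "$val"          # matches the trace's "echo 512"
        return 0             # matches the trace's "return 0"
    done < "$mem_f"
    return 1
}

# Usage with a synthetic meminfo fragment (so the sketch is self-contained):
printf '%s\n' 'MemTotal: 12241976 kB' 'HugePages_Total: 512' > /tmp/meminfo.sample
get_meminfo_sketch HugePages_Total /tmp/meminfo.sample   # prints 512
```

Passing `/sys/devices/system/node/node0/meminfo` instead of `/proc/meminfo` gives the per-node variant exercised later in the trace.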
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9159316 kB' 'MemUsed: 3082660 kB' 'SwapCached: 0 kB' 'Active: 453788 kB' 'Inactive: 1269100 kB' 'Active(anon): 132960 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1269100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1600392 kB' 'Mapped: 48700 kB' 'AnonPages: 124064 kB' 'Shmem: 10464 kB' 'KernelStack: 6384 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61324 kB' 'Slab: 135328 kB' 'SReclaimable: 61324 kB' 'SUnreclaim: 74004 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:45.179 19:42:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.179 19:42:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.179 
19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.179 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.180 19:42:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.180 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.181 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.181 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.181 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.181 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.181 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.181 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.181 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.181 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.181 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.181 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.181 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.181 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.181 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:45.181 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:45.181 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:45.181 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:45.181 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:45.181 node0=512 expecting 512 00:03:45.181 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:45.181 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:45.181 00:03:45.181 real 0m0.602s 00:03:45.181 user 0m0.292s 00:03:45.181 sys 0m0.340s 00:03:45.181 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:45.181 19:42:13 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:45.181 ************************************ 00:03:45.181 END TEST per_node_1G_alloc 00:03:45.181 ************************************ 00:03:45.181 19:42:13 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:45.181 19:42:13 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:45.181 19:42:13 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:45.181 19:42:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:45.439 ************************************ 00:03:45.439 START TEST even_2G_alloc 00:03:45.439 ************************************ 00:03:45.439 19:42:13 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:03:45.439 19:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:45.439 19:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:45.439 19:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:45.439 19:42:13 
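
The xtrace above shows setup/common.sh scanning a meminfo file one line at a time: it sets `IFS=': '`, does `read -r var val _`, and `continue`s past every key until the field name matches the requested key (compared with an escaped-glob literal like `\H\u\g\e\P\a\g\e\s\_\S\u\r\p`), at which point it echoes the value and returns 0. A minimal standalone sketch of that loop, with hypothetical names (the real helper is `get_meminfo` in setup/common.sh; `get_meminfo_sketch` here is illustrative only, and it quotes the key instead of escaping it glob-character by glob-character):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo-style loop traced in the log: split each
# "Key: value kB" line on ': ', skip non-matching keys, print the value
# of the first matching key. Reads meminfo-format lines from stdin.
get_meminfo_sketch() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    # The real script uses a literal glob match ([[ $var == \H\u\g... ]]);
    # quoting the expansion achieves the same literal comparison.
    [[ $var == "$get" ]] || continue
    echo "$val"
    return 0
  done
  return 1
}

# Usage: feed it meminfo-shaped input, as the script feeds /proc/meminfo
# or /sys/devices/system/node/node0/meminfo.
printf '%s\n' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' |
  get_meminfo_sketch HugePages_Surp
```

In the real script the matched value then feeds the arithmetic seen above, e.g. `(( nodes_test[node] += 0 ))` after `get_meminfo HugePages_Surp 0` returns 0 for node 0.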
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:45.439 19:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:45.439 19:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:45.439 19:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:45.439 19:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:45.439 19:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:45.439 19:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:45.439 19:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:45.439 19:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:45.439 19:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:45.439 19:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:45.439 19:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:45.439 19:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:03:45.439 19:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:45.439 19:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:45.440 19:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:45.440 19:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:45.440 19:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:45.440 19:42:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:45.440 19:42:13 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@9 -- # [[ output == output ]] 00:03:45.440 19:42:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:45.703 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:45.704 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:45.704 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8117852 kB' 'MemAvailable: 9501504 kB' 'Buffers: 2436 kB' 'Cached: 1597956 kB' 'SwapCached: 0 kB' 'Active: 454456 kB' 'Inactive: 1269100 kB' 'Active(anon): 133628 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1269100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 124736 kB' 'Mapped: 49012 kB' 'Shmem: 10464 kB' 'KReclaimable: 61324 kB' 'Slab: 135344 kB' 'SReclaimable: 61324 kB' 'SUnreclaim: 74020 kB' 'KernelStack: 6512 kB' 'PageTables: 4740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 363468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55140 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.704 19:42:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.704 19:42:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.704 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 19:42:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8117604 kB' 'MemAvailable: 9501260 kB' 'Buffers: 2436 kB' 'Cached: 1597960 kB' 'SwapCached: 0 kB' 'Active: 453928 kB' 'Inactive: 1269104 kB' 'Active(anon): 133100 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1269104 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 124432 kB' 'Mapped: 48904 kB' 'Shmem: 10464 kB' 'KReclaimable: 61324 kB' 'Slab: 135340 kB' 'SReclaimable: 61324 kB' 'SUnreclaim: 74016 kB' 'KernelStack: 6464 kB' 'PageTables: 4584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 363468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55108 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.705 19:42:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.705 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 19:42:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 19:42:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 19:42:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.706 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.707 
19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.707 19:42:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 
-- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8116848 kB' 'MemAvailable: 9500500 kB' 'Buffers: 2436 kB' 'Cached: 1597956 kB' 'SwapCached: 0 kB' 'Active: 453784 kB' 'Inactive: 1269100 kB' 'Active(anon): 132956 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1269100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 124044 kB' 'Mapped: 48844 kB' 'Shmem: 10464 kB' 'KReclaimable: 61324 kB' 'Slab: 135352 kB' 'SReclaimable: 61324 kB' 'SUnreclaim: 74028 kB' 'KernelStack: 6432 kB' 'PageTables: 4496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 363468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.707 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue [... identical IFS=': ' / read -r / [[ key == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue xtrace repeats for every remaining /proc/meminfo key, MemAvailable through HugePages_Free ...] 00:03:45.709 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r
var val _ 00:03:45.709 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.709 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.709 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.709 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.709 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.709 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.709 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:45.709 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:45.709 nr_hugepages=1024 00:03:45.709 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:45.709 resv_hugepages=0 00:03:45.709 surplus_hugepages=0 00:03:45.709 anon_hugepages=0 00:03:45.709 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:45.709 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:45.709 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:45.709 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:45.709 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:45.709 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:45.709 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:45.709 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:45.709 19:42:14 
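The xtrace above shows `setup/common.sh`'s `get_meminfo` walking `/proc/meminfo` one `Key: value` line at a time with `IFS=': '` and `read -r`, skipping every non-matching key with `continue` (the backslashed pattern `\H\u\g\e\P\a\g\e\s\_\R\s\v\d` forces a literal, non-glob comparison inside `[[ ]]`). A minimal sketch of that technique follows; `get_meminfo_value` is a hypothetical helper written for illustration, not the actual SPDK function:

```shell
# Hypothetical helper sketching setup/common.sh's get_meminfo technique:
# split each "Key: value [unit]" line with IFS and read, continue past
# non-matching keys, print the first match.
get_meminfo_value() {
    local get=$1 file=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        # literal key comparison, like the quoted-pattern [[ ... ]] in the log
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$file"
    return 1
}

# demo against a small sample instead of the live /proc/meminfo
sample=$(mktemp)
printf '%s\n' 'MemTotal: 12241976 kB' 'HugePages_Total: 1024' \
    'HugePages_Rsvd: 0' 'Hugepagesize: 2048 kB' > "$sample"
get_meminfo_value HugePages_Total "$sample"   # prints 1024
rm -f "$sample"
```

The real script instead slurps the whole file with `mapfile -t mem` (stripping any `Node <n> ` prefix when a per-node meminfo is read) and loops over the array, which is why each comparison appears as a separate xtrace line here.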
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:45.709 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.709 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.709 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.709 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.709 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.709 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.709 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.709 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.710 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8117136 kB' 'MemAvailable: 9500788 kB' 'Buffers: 2436 kB' 'Cached: 1597956 kB' 'SwapCached: 0 kB' 'Active: 453752 kB' 'Inactive: 1269100 kB' 'Active(anon): 132924 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1269100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 124272 kB' 'Mapped: 48844 kB' 'Shmem: 10464 kB' 'KReclaimable: 61324 kB' 'Slab: 135352 kB' 'SReclaimable: 61324 kB' 'SUnreclaim: 74028 kB' 'KernelStack: 6416 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 363468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 
kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:45.710 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.710 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.710 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.710 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.710 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.710 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.710 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.710 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.710 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.710 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.710 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.710 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.710 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.710 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.710 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.710 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.710 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.710 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue [... identical IFS=': ' / read -r / [[ key == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue xtrace repeats for every remaining /proc/meminfo key ...] 00:03:45.710 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.710 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.711 
19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.711 
19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.711 19:42:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 
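The xtrace above is SPDK's `get_meminfo` helper (test/setup/common.sh) walking a meminfo-style file field by field: each line is split with `IFS=': '`, non-matching fields are skipped with `continue`, and the value is echoed once the requested field (here `HugePages_Total`) is found. A minimal standalone sketch of that lookup follows; the function name `meminfo_get` and the sample file are our own illustration, not SPDK's code. It also applies the `Node N ` prefix strip (`${mem[@]#Node +([0-9]) }`) that the trace shows for per-node files under /sys/devices/system/node/nodeN/meminfo:

```shell
#!/usr/bin/env bash
# Sketch of the lookup traced above (hypothetical helper, not SPDK's code):
# scan a meminfo-style file with IFS=': ' and print one field's value.
shopt -s extglob                           # enables the +([0-9]) pattern

meminfo_get() {
    local get=$1 mem_f=$2 var val _ line
    local -a mem
    mapfile -t mem < "$mem_f"              # one array entry per line
    mem=("${mem[@]#Node +([0-9]) }")       # per-node files prefix "Node N "
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # non-matching field: keep scanning
        echo "$val"
        return 0
    done
    return 1                               # requested field absent
}

# Demo against a small sample rather than the live /proc/meminfo:
sample=$(mktemp)
printf '%s\n' 'MemTotal: 12241976 kB' 'HugePages_Total: 1024' > "$sample"
meminfo_get HugePages_Total "$sample"      # prints 1024
rm -f "$sample"
```

With `IFS=': '` the unit suffix (`kB`) lands in the throwaway `_` variable, which is why the trace's `echo 1024` emits a bare number.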
00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8117136 kB' 'MemUsed: 4124840 kB' 'SwapCached: 0 kB' 'Active: 453824 kB' 'Inactive: 1269100 kB' 'Active(anon): 132996 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1269100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1600392 kB' 'Mapped: 48700 kB' 'AnonPages: 124148 kB' 'Shmem: 10464 kB' 'KernelStack: 6400 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61324 kB' 'Slab: 135348 kB' 'SReclaimable: 61324 kB' 'SUnreclaim: 74024 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:45.711 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:45.713 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:45.713 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:45.713 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:45.713 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:45.713 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:45.713 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:45.713 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:45.713 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:45.713 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:45.713 node0=1024 expecting 1024
00:03:45.713 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:45.713 19:42:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:45.713
00:03:45.713 real 0m0.498s
00:03:45.713 user 0m0.250s
00:03:45.713 sys 0m0.277s
00:03:45.713 19:42:14 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- #
xtrace_disable
00:03:45.713 19:42:14 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:45.713 ************************************
00:03:45.713 END TEST even_2G_alloc
00:03:45.713 ************************************
00:03:45.972 19:42:14 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:45.972 19:42:14 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:45.972 19:42:14 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:45.972 19:42:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:45.972 ************************************
00:03:45.972 START TEST odd_alloc
00:03:45.972 ************************************
00:03:45.972 19:42:14 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc
00:03:45.972 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:45.972 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:03:45.973 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:45.973 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:45.973 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:45.973 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:45.973 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:45.973 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:45.973 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:45.973 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:45.973 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:45.973 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:45.973 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:45.973 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:45.973 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:45.973 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:03:45.973 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:45.973 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:45.973 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:45.973 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:45.973 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:45.973 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:03:45.973 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:45.973 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:46.236 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:46.236 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:46.236 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:46.236 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:46.236 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:03:46.236 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:46.236 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:46.236 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:46.236 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:46.236 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:46.236 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:46.236 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:46.236 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:46.236 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:46.236 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:46.236 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:46.236 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:46.236 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:46.236 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:46.236 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:46.236 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:46.236 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:46.236 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8115964 kB' 'MemAvailable: 9499616 kB' 'Buffers: 2436 kB' 'Cached: 1597956 kB' 'SwapCached: 0 kB' 'Active: 453584 kB' 'Inactive: 1269100 kB' 'Active(anon): 132756 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1269100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123852 kB' 'Mapped: 48816 kB' 'Shmem: 10464 kB' 'KReclaimable: 61324 kB' 'Slab: 135424 kB' 'SReclaimable: 61324 kB' 'SUnreclaim: 74100 kB' 'KernelStack: 6356 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 363468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55140 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB'
00:03:46.236 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:46.236 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:46.236 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:46.236 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:46.236 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:46.236 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:46.236 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:46.236 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:46.236 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:46.236 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:46.236 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:46.236 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:46.236 19:42:14
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ [xtrace condensed: the scan loop at setup/common.sh@31-32 reads each remaining /proc/meminfo field (Buffers through VmallocChunk) and continues until the field name matches AnonHugePages] 00:03:46.237 19:42:14
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.237 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.237 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.237 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.237 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.237 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.237 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.237 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.237 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.237 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.237 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:46.237 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:46.237 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:46.237 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.237 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:46.237 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:46.237 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.237 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.237 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.237 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.237 19:42:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.237 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.237 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.237 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8115964 kB' 'MemAvailable: 9499616 kB' 'Buffers: 2436 kB' 'Cached: 1597956 kB' 'SwapCached: 0 kB' 'Active: 453572 kB' 'Inactive: 1269100 kB' 'Active(anon): 132744 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1269100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 124168 kB' 'Mapped: 48700 kB' 'Shmem: 10464 kB' 'KReclaimable: 61324 kB' 'Slab: 135436 kB' 'SReclaimable: 61324 kB' 'SUnreclaim: 74112 kB' 'KernelStack: 6400 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 363468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:46.237 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.237 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.238 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.238 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' [xtrace condensed: the scan loop at setup/common.sh@31-32 reads each remaining /proc/meminfo field (MemFree through NFS_Unstable) and continues until the field name matches HugePages_Surp] 00:03:46.238 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read
-r var val _ 00:03:46.238 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.238 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8115964 kB' 'MemAvailable: 9499616 kB' 'Buffers: 2436 kB' 'Cached: 1597956 kB' 'SwapCached: 0 kB' 'Active: 453552 kB' 'Inactive: 1269100 kB' 'Active(anon): 132724 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1269100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 
'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123856 kB' 'Mapped: 48700 kB' 'Shmem: 10464 kB' 'KReclaimable: 61324 kB' 'Slab: 135436 kB' 'SReclaimable: 61324 kB' 'SUnreclaim: 74112 kB' 'KernelStack: 6384 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 363468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.239 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.240 
19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.240 19:42:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.240 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.241 19:42:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.241 19:42:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.241 19:42:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.241 19:42:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.241 19:42:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:46.241 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:46.241 nr_hugepages=1025 00:03:46.242 resv_hugepages=0 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:46.242 surplus_hugepages=0 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:46.242 anon_hugepages=0 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8115964 kB' 'MemAvailable: 9499616 kB' 'Buffers: 2436 kB' 'Cached: 1597956 kB' 'SwapCached: 0 kB' 'Active: 453816 kB' 'Inactive: 1269100 kB' 'Active(anon): 132988 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1269100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 124168 kB' 'Mapped: 48700 kB' 'Shmem: 10464 kB' 'KReclaimable: 61324 kB' 'Slab: 135436 kB' 'SReclaimable: 61324 kB' 'SUnreclaim: 74112 kB' 'KernelStack: 6400 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 
0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 363468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.242 
19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.242 
19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.242 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.243 19:42:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.243 19:42:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.243 19:42:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.243 19:42:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 
00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:03:46.243 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.244 
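The xtrace above (common.sh@17 through @33) shows `get_meminfo` resolving `HugePages_Total` by loading the whole meminfo file into an array, stripping any `Node N ` prefix, and scanning key/value pairs until the requested key matches. The following is a sketch reconstructed from the trace alone, not the actual `setup/common.sh` source; the `MEMINFO_FILE` override is a hypothetical hook added here for testability and does not appear in the trace.

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo loop whose xtrace appears above (reconstructed,
# line references like common.sh@28 map to the trace markers).
shopt -s extglob  # required for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f=${MEMINFO_FILE:-/proc/meminfo}
    # When a node id is given, prefer the per-node sysfs file (common.sh@23-24).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"             # slurp all lines (common.sh@28)
    mem=("${mem[@]#Node +([0-9]) }")      # drop "Node N " prefix (common.sh@29)
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"   # split key/value (common.sh@31)
        # Non-matching keys fall through, mirroring the "continue" lines above.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }  # common.sh@33
    done
    return 1
}
```

In the trace this call walks past every non-matching key (`MemTotal`, `MemFree`, ... ) before hitting `HugePages_Total` and echoing `1025`; the per-node pass that follows repeats the same loop against `/sys/devices/system/node/node0/meminfo` for `HugePages_Surp`.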
19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8115712 kB' 'MemUsed: 4126264 kB' 'SwapCached: 0 kB' 'Active: 453832 kB' 'Inactive: 1269100 kB' 'Active(anon): 133004 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1269100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1600392 kB' 'Mapped: 48700 kB' 'AnonPages: 124172 kB' 'Shmem: 10464 kB' 'KernelStack: 6400 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61324 kB' 'Slab: 135428 kB' 'SReclaimable: 61324 kB' 'SUnreclaim: 74104 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.244 19:42:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.244 19:42:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.244 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.504 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.504 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.504 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.504 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.504 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.504 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.504 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.504 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.504 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.504 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.504 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.504 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.504 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.504 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.504 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.504 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.504 19:42:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.504 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.504 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.504 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.504 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.504 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.504 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.504 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.504 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.504 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.504 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.504 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.504 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.504 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.504 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.504 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.504 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.504 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.504 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.504 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.504 19:42:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.504 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.504 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.504 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.504 19:42:14 [... remaining /proc/meminfo fields (KernelStack through HugePages_Free) skipped the same way ...] 00:03:46.505 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.505 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.505 19:42:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:46.505 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 
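[editor's note] The long xtrace run above is setup/common.sh's get_meminfo helper scanning /proc/meminfo with IFS=': ' and `continue`-ing past every field until the requested key (here HugePages_Surp) matches, then echoing its value. A minimal sketch of that pattern, simplified for illustration from the trace rather than copied from the SPDK source:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern traced in the log above:
# split each /proc/meminfo line on ': ', skip keys that don't match,
# echo the value of the first matching key.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the repeated "continue" seen in the xtrace
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

get_meminfo HugePages_Surp   # prints the surplus hugepage count
```

With IFS=': ', a line such as "MemTotal: 12241976 kB" splits into var=MemTotal, val=12241976, with the unit discarded into `_`, which is why the test script can do plain arithmetic on the values.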
00:03:46.505 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:46.505 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:46.505 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:46.505 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:03:46.505 node0=1025 expecting 1025 00:03:46.505 19:42:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:03:46.505 00:03:46.505 real 0m0.546s 00:03:46.505 user 0m0.273s 00:03:46.505 sys 0m0.285s 00:03:46.505 19:42:14 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:46.505 19:42:14 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:46.505 ************************************ 00:03:46.505 END TEST odd_alloc 00:03:46.505 ************************************ 00:03:46.505 19:42:14 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:46.505 19:42:14 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:46.505 19:42:14 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:46.505 19:42:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:46.505 ************************************ 00:03:46.505 START TEST custom_alloc 00:03:46.505 ************************************ 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@170 -- # local nodes_hp 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 
-- # : 0 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.505 19:42:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:46.767 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:46.767 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:46.767 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:46.767 19:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:03:46.767 19:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:46.767 19:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:46.767 19:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:46.767 19:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:46.767 19:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:46.767 19:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:46.767 19:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:46.767 19:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:46.767 19:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:46.767 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:46.767 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:46.767 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 
00:03:46.767 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.767 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.767 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.767 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.767 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.767 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.767 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.767 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.767 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9166864 kB' 'MemAvailable: 10550516 kB' 'Buffers: 2436 kB' 'Cached: 1597956 kB' 'SwapCached: 0 kB' 'Active: 454128 kB' 'Inactive: 1269100 kB' 'Active(anon): 133300 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1269100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 124396 kB' 'Mapped: 48832 kB' 'Shmem: 10464 kB' 'KReclaimable: 61324 kB' 'Slab: 135432 kB' 'SReclaimable: 61324 kB' 'SUnreclaim: 74108 kB' 'KernelStack: 6392 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 363468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55156 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 
512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:46.767 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.767 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.767 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.767 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.767 19:42:15 [... remaining /proc/meminfo fields (MemFree through HardwareCorrupted) skipped the same way ...] 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.769 
19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9166864 kB' 'MemAvailable: 10550516 kB' 'Buffers: 2436 kB' 'Cached: 1597956 kB' 'SwapCached: 0 kB' 'Active: 453904 kB' 'Inactive: 1269100 kB' 'Active(anon): 133076 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1269100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 124152 kB' 'Mapped: 48704 kB' 'Shmem: 10464 kB' 'KReclaimable: 61324 kB' 'Slab: 135436 kB' 'SReclaimable: 61324 kB' 'SUnreclaim: 74112 kB' 'KernelStack: 6400 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 363468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55140 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.769 
19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.769 
19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.769 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 19:42:15 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 
19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.770 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.771 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.771 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.771 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.771 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.771 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.771 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.771 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.771 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.771 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.771 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.771 19:42:15 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:46.771 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.771 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.771 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.771 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.771 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.771 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.771 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:46.771 19:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:46.771 19:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:46.771 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:46.771 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:46.771 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:46.771 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.771 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.771 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.771 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.032 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.032 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.032 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.032 
19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9166864 kB' 'MemAvailable: 10550516 kB' 'Buffers: 2436 kB' 'Cached: 1597956 kB' 'SwapCached: 0 kB' 'Active: 453632 kB' 'Inactive: 1269100 kB' 'Active(anon): 132804 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1269100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123928 kB' 'Mapped: 48704 kB' 'Shmem: 10464 kB' 'KReclaimable: 61324 kB' 'Slab: 135428 kB' 'SReclaimable: 61324 kB' 'SUnreclaim: 74104 kB' 'KernelStack: 6400 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 363468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.033 19:42:15 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.033 19:42:15 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.033 19:42:15 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.033 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.033 19:42:15 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:47.034 nr_hugepages=512 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:47.034 resv_hugepages=0 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:47.034 surplus_hugepages=0 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:47.034 anon_hugepages=0 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9166864 kB' 'MemAvailable: 10550516 kB' 'Buffers: 2436 kB' 'Cached: 1597956 kB' 'SwapCached: 0 kB' 'Active: 453640 kB' 'Inactive: 1269100 kB' 'Active(anon): 132812 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1269100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123928 kB' 'Mapped: 48704 kB' 'Shmem: 10464 kB' 'KReclaimable: 61324 kB' 'Slab: 135428 kB' 'SReclaimable: 61324 kB' 'SUnreclaim: 74104 kB' 'KernelStack: 6400 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 363468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.034 19:42:15 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.034 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.035 19:42:15 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.035 19:42:15 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.035 19:42:15 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.035 19:42:15 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.035 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9166864 kB' 'MemUsed: 3075112 kB' 'SwapCached: 0 kB' 'Active: 453884 kB' 'Inactive: 1269100 kB' 'Active(anon): 133056 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1269100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 1600392 kB' 'Mapped: 48704 kB' 'AnonPages: 124100 kB' 'Shmem: 10464 kB' 'KernelStack: 6384 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61324 kB' 'Slab: 135428 kB' 'SReclaimable: 61324 kB' 'SUnreclaim: 74104 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.036 19:42:15 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.036 19:42:15 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.036 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.037 
19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.037 19:42:15 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:47.037 node0=512 expecting 512 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:47.037 00:03:47.037 real 0m0.564s 00:03:47.037 user 0m0.299s 00:03:47.037 sys 0m0.300s 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:47.037 19:42:15 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:47.037 ************************************ 00:03:47.037 END TEST custom_alloc 00:03:47.037 ************************************ 00:03:47.037 19:42:15 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:47.037 19:42:15 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:47.037 19:42:15 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:47.037 19:42:15 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:47.037 ************************************ 00:03:47.037 START TEST no_shrink_alloc 00:03:47.037 ************************************ 00:03:47.037 19:42:15 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- 
# no_shrink_alloc 00:03:47.037 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:47.037 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:47.037 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:47.037 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:47.037 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:47.037 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:47.037 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:47.037 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:47.037 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:47.037 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:47.037 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:47.037 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:47.037 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:47.037 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:47.037 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:47.037 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:47.037 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:47.037 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:47.037 19:42:15 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:47.037 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:47.037 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.037 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:47.295 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:47.295 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:47.295 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:47.559 19:42:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8118804 kB' 'MemAvailable: 9502456 kB' 'Buffers: 2436 kB' 'Cached: 1597956 kB' 'SwapCached: 0 kB' 'Active: 454160 kB' 'Inactive: 1269100 kB' 'Active(anon): 133332 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1269100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 124484 kB' 'Mapped: 48816 kB' 'Shmem: 10464 kB' 'KReclaimable: 61324 kB' 'Slab: 135440 kB' 'SReclaimable: 61324 kB' 'SUnreclaim: 74116 kB' 'KernelStack: 6456 kB' 'PageTables: 4460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 363468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55140 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.559 
19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.559 19:42:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.559 
19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.559 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.560 19:42:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.560 19:42:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.560 19:42:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.560 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.561 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8118804 kB' 'MemAvailable: 9502456 kB' 'Buffers: 2436 kB' 'Cached: 1597956 kB' 'SwapCached: 0 kB' 'Active: 453696 kB' 'Inactive: 1269100 kB' 'Active(anon): 132868 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1269100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 124020 kB' 'Mapped: 48816 kB' 'Shmem: 10464 kB' 'KReclaimable: 61324 kB' 'Slab: 135440 kB' 'SReclaimable: 61324 kB' 'SUnreclaim: 74116 kB' 'KernelStack: 6392 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 363468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 
5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:47.561 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.561 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.561 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.561 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.561 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.561 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.561 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.561 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.561 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.561 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.561 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.561 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.561 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.561 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.561 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.561 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.561 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.561 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.561 19:42:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.561 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.561 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.561 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.561 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.561 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.561 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.561 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.561 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.561 19:42:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.561 19:42:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.561 19:42:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.561 19:42:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.561 
19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.561 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.562 19:42:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@99 -- # surp=0 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.562 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8118804 kB' 'MemAvailable: 9502456 kB' 'Buffers: 2436 kB' 'Cached: 1597956 kB' 'SwapCached: 0 kB' 'Active: 453896 kB' 'Inactive: 1269100 kB' 'Active(anon): 133068 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1269100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 124220 kB' 'Mapped: 48708 kB' 'Shmem: 10464 kB' 'KReclaimable: 61324 kB' 'Slab: 135444 kB' 'SReclaimable: 61324 kB' 'SUnreclaim: 74120 kB' 'KernelStack: 6400 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'CommitLimit: 13461016 kB' 'Committed_AS: 363468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.563 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.564 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.564 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.564 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.564 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.565 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.566 19:42:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:47.566 nr_hugepages=1024 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:47.566 resv_hugepages=0 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:47.566 surplus_hugepages=0 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:47.566 anon_hugepages=0 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8118804 kB' 'MemAvailable: 9502456 kB' 'Buffers: 2436 kB' 'Cached: 1597956 kB' 'SwapCached: 0 kB' 'Active: 453640 kB' 'Inactive: 1269100 kB' 'Active(anon): 132812 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1269100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 124180 kB' 'Mapped: 48708 kB' 'Shmem: 10464 kB' 'KReclaimable: 61324 kB' 'Slab: 135432 kB' 'SReclaimable: 61324 kB' 'SUnreclaim: 74108 kB' 'KernelStack: 6384 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 363468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.566 19:42:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.566 19:42:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.566 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 19:42:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 
19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.567 
19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.567 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 19:42:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.568 19:42:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8118804 kB' 'MemUsed: 4123172 kB' 'SwapCached: 0 kB' 'Active: 453644 kB' 'Inactive: 1269100 kB' 'Active(anon): 132816 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1269100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 1600392 kB' 'Mapped: 48708 kB' 'AnonPages: 124200 kB' 'Shmem: 10464 kB' 'KernelStack: 6400 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61324 kB' 'Slab: 135428 kB' 'SReclaimable: 61324 kB' 'SUnreclaim: 74104 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 19:42:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 19:42:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.568 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:47.569 node0=1024 expecting 1024 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:47.569 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:47.570 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:47.570 19:42:16 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:47.570 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:47.570 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.570 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:47.830 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:47.830 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:47.830 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:47.830 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@19 -- # local var val 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8139012 kB' 'MemAvailable: 9522656 kB' 'Buffers: 2436 kB' 'Cached: 1597956 kB' 'SwapCached: 0 kB' 'Active: 449388 kB' 'Inactive: 1269100 kB' 'Active(anon): 128560 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1269100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 119660 kB' 'Mapped: 48156 kB' 'Shmem: 10464 kB' 'KReclaimable: 61312 kB' 'Slab: 135128 kB' 'SReclaimable: 61312 kB' 'SUnreclaim: 73816 kB' 'KernelStack: 6312 kB' 'PageTables: 3800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:47.830 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.831 19:42:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.831 19:42:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.831 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:48.095 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:48.095 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:48.095 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.095 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:48.095 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:48.095 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.095 19:42:16 
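The trace above is one full pass of the `get_meminfo` helper: `setup/common.sh` reads a meminfo dump line by line with `IFS=': '`, compares each key against the requested field, and when no line matches falls through to the `@33 -- # echo 0 / return 0` branch (here yielding `anon=0`). A minimal sketch of that pattern — not SPDK's exact helper; the function body, the default `/proc/meminfo` path, and the fixture file below are illustrative assumptions modeled on the trace:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern this log exercises: split each
# meminfo line on ':' and space, print the value for one key, and
# fall back to 0 when the key is absent (the "echo 0" branch above).
get_meminfo() {
  local get=$1 mem_f=${2:-/proc/meminfo}   # file argument is for illustration
  local var val _
  while IFS=': ' read -r var val _; do
    if [[ $var == "$get" ]]; then
      echo "$val"        # value only; the unit ("kB") lands in $_
      return 0
    fi
  done <"$mem_f"
  echo 0                 # field not present: report 0, as the script does
}

# Usage against a fixture shaped like the printf dump in this log:
printf '%s\n' 'MemTotal: 12241976 kB' 'HugePages_Total: 1024' >/tmp/meminfo.fixture
get_meminfo HugePages_Total /tmp/meminfo.fixture   # prints 1024
get_meminfo HugePages_Surp  /tmp/meminfo.fixture   # prints 0
```

The real helper also redirects `mem_f` to `/sys/devices/system/node/node$node/meminfo` when a node is given (the `@23` test in the trace); this sketch keeps only the whole-system path.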
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.095 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.095 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.095 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.095 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.095 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.095 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.095 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8138764 kB' 'MemAvailable: 9522408 kB' 'Buffers: 2436 kB' 'Cached: 1597956 kB' 'SwapCached: 0 kB' 'Active: 448816 kB' 'Inactive: 1269100 kB' 'Active(anon): 127988 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1269100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 119108 kB' 'Mapped: 47968 kB' 'Shmem: 10464 kB' 'KReclaimable: 61312 kB' 'Slab: 134980 kB' 'SReclaimable: 61312 kB' 'SUnreclaim: 73668 kB' 'KernelStack: 6288 kB' 'PageTables: 3816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 
'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:48.095 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.095 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.095 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.095 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.095 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.095 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.095 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.095 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.095 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.095 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.095 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.095 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.095 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.095 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.095 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.095 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.095 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.095 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.095 19:42:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.095 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.095 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.095 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.095 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.095 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.095 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.096 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.096 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.096 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.096 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.096 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.096 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.096 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.096 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.096 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.096 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.096 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.096 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:48.096 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.096 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.096 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.096 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.096 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.096 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.096 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.096 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.096 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.096 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.096 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.096 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.096 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.096 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.096 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.096 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.096 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.096 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.096 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.096 19:42:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: remaining /proc/meminfo keys (SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd) each compared against HugePages_Surp and skipped via continue] 00:03:48.097 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.097 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.097 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:48.097 19:42:16 setup.sh.hugepages.no_shrink_alloc --
setup/hugepages.sh@99 -- # surp=0 00:03:48.097 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:48.097 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:48.097 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:48.097 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:48.097 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.097 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.097 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.097 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.097 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.097 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.097 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.097 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.097 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8138764 kB' 'MemAvailable: 9522408 kB' 'Buffers: 2436 kB' 'Cached: 1597956 kB' 'SwapCached: 0 kB' 'Active: 449076 kB' 'Inactive: 1269100 kB' 'Active(anon): 128248 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1269100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 119368 kB' 'Mapped: 47968 kB' 'Shmem: 10464 kB' 'KReclaimable: 61312 kB' 'Slab: 134980 kB' 'SReclaimable: 61312 kB' 'SUnreclaim: 73668 kB' 'KernelStack: 6288 kB' 
'PageTables: 3816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:48.097 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: /proc/meminfo keys from MemTotal through HugePages_Free each compared against HugePages_Rsvd and skipped via continue] 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.099 19:42:16
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:48.099 nr_hugepages=1024 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:48.099 resv_hugepages=0 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:48.099 surplus_hugepages=0 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:48.099 anon_hugepages=0 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8138764 kB' 'MemAvailable: 9522408 kB' 'Buffers: 2436 kB' 'Cached: 1597956 kB' 'SwapCached: 0 kB' 'Active: 449036 kB' 'Inactive: 1269100 kB' 'Active(anon): 128208 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1269100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 119364 kB' 'Mapped: 47968 kB' 'Shmem: 10464 kB' 'KReclaimable: 61312 kB' 'Slab: 134980 kB' 'SReclaimable: 61312 kB' 'SUnreclaim: 73668 kB' 'KernelStack: 6288 kB' 'PageTables: 3816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.099 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.100 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.100 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.100 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.100 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.100 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.100 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.100 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.101 19:42:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.101 19:42:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.101 19:42:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.101 
19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.101 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.102 
19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.102 19:42:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.102 19:42:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8138764 kB' 'MemUsed: 4103212 kB' 'SwapCached: 0 kB' 'Active: 448724 kB' 'Inactive: 1269100 kB' 'Active(anon): 127896 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1269100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'FilePages: 1600392 kB' 'Mapped: 47968 kB' 'AnonPages: 119260 kB' 'Shmem: 10464 kB' 'KernelStack: 6272 kB' 'PageTables: 3768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61312 kB' 'Slab: 134980 kB' 'SReclaimable: 61312 kB' 'SUnreclaim: 73668 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.102 19:42:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.102 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.103 19:42:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.103 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.104 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.104 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.104 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.104 19:42:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.104 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.104 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.104 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.104 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.104 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.104 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.104 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.104 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.104 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.104 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:48.104 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.104 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:48.104 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:48.104 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:48.104 node0=1024 expecting 1024 00:03:48.104 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:48.104 19:42:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:48.104 00:03:48.104 real 0m1.044s 00:03:48.104 user 0m0.541s 00:03:48.104 sys 0m0.579s 00:03:48.104 19:42:16 
setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:48.104 19:42:16 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:48.104 ************************************ 00:03:48.104 END TEST no_shrink_alloc 00:03:48.104 ************************************ 00:03:48.104 19:42:16 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:48.104 19:42:16 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:48.104 19:42:16 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:48.104 19:42:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:48.104 19:42:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:48.104 19:42:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:48.104 19:42:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:48.104 19:42:16 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:48.104 19:42:16 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:48.104 00:03:48.104 real 0m4.794s 00:03:48.104 user 0m2.292s 00:03:48.104 sys 0m2.622s 00:03:48.104 19:42:16 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:48.104 19:42:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:48.104 ************************************ 00:03:48.104 END TEST hugepages 00:03:48.104 ************************************ 00:03:48.104 19:42:16 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:48.104 19:42:16 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:48.104 19:42:16 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:48.104 19:42:16 setup.sh -- common/autotest_common.sh@10 -- # set +x 
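The no_shrink_alloc trace above is dominated by setup/common.sh's field scan: an `IFS=': ' read -r var val _` loop walks /proc/meminfo one line at a time, hitting `continue` for every key until the backslash-escaped pattern (`\H\u\g\e\P\a\g\e\s\_\S\u\r\p`, which forces a literal rather than glob comparison) finally matches `HugePages_Surp`, at which point the value is echoed. A minimal standalone sketch of that extraction technique follows; it reads from a sample here-string instead of the live /proc/meminfo, and the sample values are illustrative only:

```shell
#!/usr/bin/env bash
# Sketch of the setup/common.sh field-extraction loop seen in the trace:
# split "Key:   value kB" lines on ':' and spaces, skip until the wanted key.
get_meminfo_field() {
    local wanted=$1 var val _
    while IFS=': ' read -r var val _; do
        # The trace's backslash-escaped pattern forces a literal match;
        # quoting the right-hand side achieves the same effect here.
        [[ $var == "$wanted" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

# Sample /proc/meminfo-style input (values are made up for illustration).
sample='MemTotal:       16384000 kB
HugePages_Total:    1024
HugePages_Free:     1024
HugePages_Surp:        0'

get_meminfo_field HugePages_Surp <<<"$sample"   # prints: 0
```

On a live system the loop would read `< /proc/meminfo` instead of the here-string; everything else is identical to the pattern visible in the trace.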
00:03:48.104 ************************************ 00:03:48.104 START TEST driver 00:03:48.104 ************************************ 00:03:48.104 19:42:16 setup.sh.driver -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:48.362 * Looking for test storage... 00:03:48.362 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:48.362 19:42:16 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:48.362 19:42:16 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:48.362 19:42:16 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:48.930 19:42:17 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:48.930 19:42:17 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:48.930 19:42:17 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:48.930 19:42:17 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:48.930 ************************************ 00:03:48.930 START TEST guess_driver 00:03:48.930 ************************************ 00:03:48.930 19:42:17 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:03:48.930 19:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:48.930 19:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:48.930 19:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:48.930 19:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:48.930 19:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:48.930 19:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:48.930 19:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:48.930 19:42:17 
setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:48.930 19:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:03:48.930 19:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:03:48.930 19:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:03:48.930 19:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:03:48.930 19:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:03:48.930 19:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:03:48.930 19:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:03:48.930 19:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:03:48.930 19:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:03:48.930 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:03:48.930 19:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:03:48.930 19:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:03:48.930 19:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:48.930 Looking for driver=uio_pci_generic 00:03:48.930 19:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:03:48.930 19:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.930 19:42:17 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:48.930 19:42:17 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.930 19:42:17 
setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:49.866 19:42:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:03:49.866 19:42:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:03:49.866 19:42:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:49.866 19:42:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:49.866 19:42:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:49.866 19:42:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:49.866 19:42:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:49.866 19:42:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:49.866 19:42:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:49.866 19:42:18 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:49.866 19:42:18 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:49.866 19:42:18 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:49.866 19:42:18 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:50.458 00:03:50.458 real 0m1.561s 00:03:50.458 user 0m0.600s 00:03:50.458 sys 0m0.990s 00:03:50.458 19:42:19 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:50.458 ************************************ 00:03:50.458 END TEST guess_driver 00:03:50.458 19:42:19 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:50.458 ************************************ 00:03:50.458 00:03:50.458 real 0m2.347s 00:03:50.458 user 0m0.844s 00:03:50.458 sys 
0m1.605s 00:03:50.458 19:42:19 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:50.458 19:42:19 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:50.458 ************************************ 00:03:50.458 END TEST driver 00:03:50.458 ************************************ 00:03:50.458 19:42:19 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:50.458 19:42:19 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:50.458 19:42:19 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:50.458 19:42:19 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:50.458 ************************************ 00:03:50.458 START TEST devices 00:03:50.458 ************************************ 00:03:50.458 19:42:19 setup.sh.devices -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:50.717 * Looking for test storage... 00:03:50.717 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:50.717 19:42:19 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:50.717 19:42:19 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:50.717 19:42:19 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:50.717 19:42:19 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:51.652 19:42:20 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:51.652 19:42:20 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:51.652 19:42:20 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:51.652 19:42:20 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:51.652 19:42:20 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:51.652 19:42:20 setup.sh.devices -- common/autotest_common.sh@1673 -- # 
is_block_zoned nvme0n1 00:03:51.652 19:42:20 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:51.652 19:42:20 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:51.652 19:42:20 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:51.652 19:42:20 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:51.652 19:42:20 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:03:51.652 19:42:20 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:03:51.652 19:42:20 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:03:51.652 19:42:20 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:51.652 19:42:20 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:51.652 19:42:20 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 00:03:51.652 19:42:20 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:03:51.652 19:42:20 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:03:51.652 19:42:20 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:51.652 19:42:20 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:51.652 19:42:20 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:51.652 19:42:20 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:51.652 19:42:20 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:51.652 19:42:20 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:51.652 19:42:20 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:51.652 
19:42:20 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:51.652 19:42:20 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:51.652 19:42:20 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:51.652 19:42:20 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:51.652 19:42:20 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:51.652 19:42:20 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:51.652 19:42:20 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:51.652 19:42:20 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:51.652 19:42:20 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:51.652 19:42:20 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:51.652 19:42:20 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:51.652 19:42:20 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:03:51.652 No valid GPT data, bailing 00:03:51.652 19:42:20 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:51.652 19:42:20 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:51.652 19:42:20 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:51.652 19:42:20 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:51.652 19:42:20 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:51.652 19:42:20 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:51.652 19:42:20 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:51.652 19:42:20 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:51.652 19:42:20 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:51.652 19:42:20 setup.sh.devices -- 
setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:51.652 19:42:20 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:51.652 19:42:20 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:03:51.652 19:42:20 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:51.652 19:42:20 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:51.652 19:42:20 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:51.652 19:42:20 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:03:51.652 19:42:20 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:03:51.652 19:42:20 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:03:51.652 No valid GPT data, bailing 00:03:51.652 19:42:20 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:03:51.652 19:42:20 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:51.652 19:42:20 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:51.652 19:42:20 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:03:51.652 19:42:20 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:03:51.652 19:42:20 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:03:51.652 19:42:20 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:51.652 19:42:20 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:51.652 19:42:20 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:51.652 19:42:20 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:51.652 19:42:20 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:51.652 19:42:20 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:03:51.652 19:42:20 
setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:51.652 19:42:20 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:51.652 19:42:20 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:51.653 19:42:20 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:03:51.653 19:42:20 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:03:51.653 19:42:20 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:03:51.653 No valid GPT data, bailing 00:03:51.653 19:42:20 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:03:51.653 19:42:20 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:51.653 19:42:20 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:51.653 19:42:20 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:03:51.653 19:42:20 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:03:51.653 19:42:20 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:03:51.653 19:42:20 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:51.653 19:42:20 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:51.653 19:42:20 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:51.653 19:42:20 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:51.653 19:42:20 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:51.653 19:42:20 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:03:51.653 19:42:20 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:51.653 19:42:20 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:03:51.653 19:42:20 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:51.653 19:42:20 
setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:03:51.653 19:42:20 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:03:51.653 19:42:20 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:03:51.653 No valid GPT data, bailing 00:03:51.653 19:42:20 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:51.653 19:42:20 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:51.653 19:42:20 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:51.911 19:42:20 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:03:51.911 19:42:20 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:03:51.911 19:42:20 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:03:51.911 19:42:20 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:03:51.911 19:42:20 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:03:51.911 19:42:20 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:51.911 19:42:20 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:03:51.911 19:42:20 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:03:51.911 19:42:20 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:51.911 19:42:20 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:51.911 19:42:20 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:51.911 19:42:20 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:51.911 19:42:20 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:51.911 ************************************ 00:03:51.911 START TEST nvme_mount 00:03:51.911 ************************************ 00:03:51.911 19:42:20 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@1125 -- # nvme_mount 00:03:51.911 19:42:20 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:51.911 19:42:20 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:51.911 19:42:20 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:51.911 19:42:20 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:51.912 19:42:20 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:51.912 19:42:20 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:51.912 19:42:20 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:51.912 19:42:20 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:51.912 19:42:20 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:51.912 19:42:20 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:51.912 19:42:20 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:51.912 19:42:20 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:51.912 19:42:20 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:51.912 19:42:20 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:51.912 19:42:20 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:51.912 19:42:20 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:51.912 19:42:20 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:51.912 19:42:20 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:51.912 19:42:20 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # 
/home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:52.847 Creating new GPT entries in memory. 00:03:52.847 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:52.847 other utilities. 00:03:52.847 19:42:21 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:52.847 19:42:21 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:52.847 19:42:21 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:52.847 19:42:21 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:52.847 19:42:21 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:53.782 Creating new GPT entries in memory. 00:03:53.782 The operation has completed successfully. 00:03:53.782 19:42:22 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:53.782 19:42:22 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:53.782 19:42:22 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 57124 00:03:53.782 19:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.782 19:42:22 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:03:53.782 19:42:22 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.782 19:42:22 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:53.782 19:42:22 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:53.782 19:42:22 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:54.040 19:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:54.040 19:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:54.040 19:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:54.040 19:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:54.040 19:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:54.040 19:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:54.040 19:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:54.040 19:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:54.040 19:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:54.040 19:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.040 19:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:54.040 19:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:54.040 19:42:22 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.040 19:42:22 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:54.040 19:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.040 19:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding 
PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:54.040 19:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:54.040 19:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.040 19:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.040 19:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.298 19:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.298 19:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.298 19:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.298 19:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.556 19:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:54.556 19:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:54.556 19:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:54.556 19:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:54.556 19:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:54.556 19:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:54.556 19:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:54.556 19:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:54.556 
19:42:22 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:54.556 19:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:54.556 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:54.556 19:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:54.556 19:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:54.863 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:54.863 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:54.863 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:54.863 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:54.863 19:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:03:54.863 19:42:23 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:03:54.863 19:42:23 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:54.863 19:42:23 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:54.863 19:42:23 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:54.863 19:42:23 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:54.863 19:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:54.863 19:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:54.863 
19:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:54.863 19:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:54.863 19:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:54.863 19:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:54.863 19:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:54.863 19:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:54.863 19:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:54.863 19:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.863 19:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:54.863 19:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:54.863 19:42:23 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.863 19:42:23 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:55.169 19:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:55.169 19:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:55.169 19:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:55.169 19:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.169 19:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 
00:03:55.169 19:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.169 19:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:55.169 19:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.169 19:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:55.169 19:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.169 19:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:55.169 19:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:55.169 19:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:55.169 19:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:55.169 19:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:55.169 19:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:55.425 19:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:03:55.425 19:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:55.425 19:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:55.425 19:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:55.425 19:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:55.425 19:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:55.425 19:42:23 
setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:55.425 19:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:55.425 19:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.425 19:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:55.425 19:42:23 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:55.425 19:42:23 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.425 19:42:23 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:55.683 19:42:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:55.683 19:42:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:55.683 19:42:24 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:55.683 19:42:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.683 19:42:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:55.683 19:42:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.683 19:42:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:55.683 19:42:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.941 19:42:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:55.941 19:42:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.941 19:42:24 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:55.941 
19:42:24 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:55.941 19:42:24 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:55.941 19:42:24 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:55.941 19:42:24 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:55.941 19:42:24 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:55.941 19:42:24 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:55.941 19:42:24 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:55.941 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:55.941 00:03:55.941 real 0m4.128s 00:03:55.941 user 0m0.736s 00:03:55.941 sys 0m1.123s 00:03:55.941 19:42:24 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:55.942 19:42:24 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:55.942 ************************************ 00:03:55.942 END TEST nvme_mount 00:03:55.942 ************************************ 00:03:55.942 19:42:24 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:55.942 19:42:24 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:55.942 19:42:24 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:55.942 19:42:24 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:55.942 ************************************ 00:03:55.942 START TEST dm_mount 00:03:55.942 ************************************ 00:03:55.942 19:42:24 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:03:55.942 19:42:24 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:55.942 19:42:24 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:55.942 
19:42:24 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:55.942 19:42:24 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:55.942 19:42:24 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:55.942 19:42:24 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:55.942 19:42:24 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:55.942 19:42:24 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:55.942 19:42:24 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:55.942 19:42:24 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:55.942 19:42:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:55.942 19:42:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:55.942 19:42:24 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:55.942 19:42:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:55.942 19:42:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:55.942 19:42:24 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:55.942 19:42:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:55.942 19:42:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:55.942 19:42:24 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:55.942 19:42:24 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:55.942 19:42:24 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:56.876 Creating new GPT entries in memory. 00:03:56.876 GPT data structures destroyed! 
You may now partition the disk using fdisk or 00:03:56.876 other utilities. 00:03:56.876 19:42:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:56.876 19:42:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:56.876 19:42:25 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:56.876 19:42:25 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:56.876 19:42:25 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:58.274 Creating new GPT entries in memory. 00:03:58.274 The operation has completed successfully. 00:03:58.274 19:42:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:58.274 19:42:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:58.274 19:42:26 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:58.274 19:42:26 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:58.274 19:42:26 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:03:59.207 The operation has completed successfully. 
00:03:59.207 19:42:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:59.207 19:42:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:59.207 19:42:27 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 57560 00:03:59.207 19:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:59.207 19:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:59.207 19:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:59.207 19:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:59.207 19:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:59.207 19:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:59.207 19:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:59.207 19:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:59.207 19:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:59.207 19:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:59.207 19:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:59.207 19:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:59.207 19:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:59.207 19:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:59.207 19:42:27 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test 
mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:03:59.207 19:42:27 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:59.207 19:42:27 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:59.207 19:42:27 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:59.207 19:42:27 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:59.207 19:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:59.207 19:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:59.207 19:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:59.207 19:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:59.207 19:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:59.207 19:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:59.207 19:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:59.207 19:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:59.207 19:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:59.207 19:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.207 19:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:59.207 19:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 
00:03:59.207 19:42:27 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.207 19:42:27 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:59.465 19:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:59.465 19:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:59.465 19:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:59.465 19:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.465 19:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:59.465 19:42:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.465 19:42:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:59.465 19:42:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.465 19:42:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:59.465 19:42:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.724 19:42:28 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:59.724 19:42:28 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:03:59.724 19:42:28 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:59.724 19:42:28 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:59.724 19:42:28 
setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:59.724 19:42:28 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:59.724 19:42:28 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:59.724 19:42:28 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:59.724 19:42:28 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:59.724 19:42:28 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:59.724 19:42:28 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:59.724 19:42:28 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:59.724 19:42:28 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:59.724 19:42:28 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:59.724 19:42:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.724 19:42:28 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:59.724 19:42:28 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:59.724 19:42:28 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.724 19:42:28 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:59.983 19:42:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:59.983 19:42:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:59.983 19:42:28 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:59.983 19:42:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.983 19:42:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:59.983 19:42:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.983 19:42:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:59.983 19:42:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.983 19:42:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:59.983 19:42:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.983 19:42:28 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:59.983 19:42:28 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:59.983 19:42:28 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:59.983 19:42:28 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:59.983 19:42:28 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:00.241 19:42:28 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:00.241 19:42:28 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:00.241 19:42:28 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:00.241 19:42:28 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:00.241 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:00.241 19:42:28 setup.sh.devices.dm_mount -- 
setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:00.241 19:42:28 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:00.241 00:04:00.241 real 0m4.187s 00:04:00.241 user 0m0.449s 00:04:00.241 sys 0m0.649s 00:04:00.241 19:42:28 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:00.241 ************************************ 00:04:00.241 END TEST dm_mount 00:04:00.241 ************************************ 00:04:00.241 19:42:28 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:00.241 19:42:28 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:00.241 19:42:28 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:00.241 19:42:28 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:00.241 19:42:28 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:00.241 19:42:28 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:00.241 19:42:28 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:00.241 19:42:28 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:00.498 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:00.498 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:00.498 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:00.498 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:00.498 19:42:29 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:00.498 19:42:29 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:00.498 19:42:29 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:00.498 19:42:29 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:00.498 
19:42:29 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:00.498 19:42:29 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:00.498 19:42:29 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:00.498 00:04:00.498 real 0m9.920s 00:04:00.498 user 0m1.844s 00:04:00.498 sys 0m2.435s 00:04:00.498 19:42:29 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:00.498 19:42:29 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:00.498 ************************************ 00:04:00.498 END TEST devices 00:04:00.498 ************************************ 00:04:00.498 00:04:00.498 real 0m21.954s 00:04:00.498 user 0m7.071s 00:04:00.498 sys 0m9.442s 00:04:00.498 19:42:29 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:00.498 19:42:29 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:00.498 ************************************ 00:04:00.498 END TEST setup.sh 00:04:00.498 ************************************ 00:04:00.498 19:42:29 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:01.063 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:01.063 Hugepages 00:04:01.063 node hugesize free / total 00:04:01.063 node0 1048576kB 0 / 0 00:04:01.063 node0 2048kB 2048 / 2048 00:04:01.063 00:04:01.063 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:01.321 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:01.321 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:01.321 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:04:01.321 19:42:29 -- spdk/autotest.sh@130 -- # uname -s 00:04:01.321 19:42:29 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:01.321 19:42:29 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:01.321 19:42:29 -- common/autotest_common.sh@1531 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:01.921 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:01.921 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:02.178 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:02.178 19:42:30 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:03.111 19:42:31 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:03.111 19:42:31 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:03.111 19:42:31 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:03.111 19:42:31 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:03.111 19:42:31 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:03.111 19:42:31 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:03.111 19:42:31 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:03.111 19:42:31 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:03.111 19:42:31 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:03.111 19:42:31 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:03.111 19:42:31 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:03.111 19:42:31 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:03.676 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:03.676 Waiting for block devices as requested 00:04:03.676 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:03.676 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:03.933 19:42:32 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:03.933 19:42:32 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:03.933 19:42:32 -- common/autotest_common.sh@1502 -- # readlink -f 
/sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:03.933 19:42:32 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:04:03.933 19:42:32 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:03.933 19:42:32 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:03.933 19:42:32 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:03.933 19:42:32 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:04:03.933 19:42:32 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:04:03.933 19:42:32 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:04:03.933 19:42:32 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:04:03.933 19:42:32 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:03.933 19:42:32 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:03.933 19:42:32 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:03.933 19:42:32 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:03.933 19:42:32 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:03.934 19:42:32 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:04:03.934 19:42:32 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:03.934 19:42:32 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:03.934 19:42:32 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:03.934 19:42:32 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:03.934 19:42:32 -- common/autotest_common.sh@1557 -- # continue 00:04:03.934 19:42:32 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:03.934 19:42:32 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:03.934 19:42:32 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:03.934 19:42:32 -- 
common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:04:03.934 19:42:32 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:03.934 19:42:32 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:03.934 19:42:32 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:03.934 19:42:32 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:03.934 19:42:32 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:03.934 19:42:32 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:03.934 19:42:32 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:03.934 19:42:32 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:03.934 19:42:32 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:03.934 19:42:32 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:03.934 19:42:32 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:03.934 19:42:32 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:03.934 19:42:32 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:03.934 19:42:32 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:03.934 19:42:32 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:03.934 19:42:32 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:03.934 19:42:32 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:03.934 19:42:32 -- common/autotest_common.sh@1557 -- # continue 00:04:03.934 19:42:32 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:03.934 19:42:32 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:03.934 19:42:32 -- common/autotest_common.sh@10 -- # set +x 00:04:03.934 19:42:32 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:03.934 19:42:32 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:03.934 19:42:32 -- common/autotest_common.sh@10 -- 
# set +x 00:04:03.934 19:42:32 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:04.507 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:04.766 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:04.766 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:04.766 19:42:33 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:04.766 19:42:33 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:04.766 19:42:33 -- common/autotest_common.sh@10 -- # set +x 00:04:04.766 19:42:33 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:04.766 19:42:33 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:04.766 19:42:33 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:04.766 19:42:33 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:04.766 19:42:33 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:04.766 19:42:33 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:04.766 19:42:33 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:04.766 19:42:33 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:04.766 19:42:33 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:04.766 19:42:33 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:04.766 19:42:33 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:05.030 19:42:33 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:05.030 19:42:33 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:05.030 19:42:33 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:05.030 19:42:33 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:05.030 19:42:33 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:05.030 19:42:33 -- 
common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:05.030 19:42:33 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:05.030 19:42:33 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:05.030 19:42:33 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:05.030 19:42:33 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:05.030 19:42:33 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:05.030 19:42:33 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:05.030 19:42:33 -- common/autotest_common.sh@1593 -- # return 0 00:04:05.030 19:42:33 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:05.030 19:42:33 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:05.030 19:42:33 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:05.030 19:42:33 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:05.030 19:42:33 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:05.030 19:42:33 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:05.030 19:42:33 -- common/autotest_common.sh@10 -- # set +x 00:04:05.030 19:42:33 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 00:04:05.030 19:42:33 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:05.030 19:42:33 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:05.030 19:42:33 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:05.030 19:42:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:05.030 19:42:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:05.030 19:42:33 -- common/autotest_common.sh@10 -- # set +x 00:04:05.030 ************************************ 00:04:05.030 START TEST env 00:04:05.030 ************************************ 00:04:05.030 19:42:33 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:05.030 * Looking for test storage... 
00:04:05.030 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:05.030 19:42:33 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:05.030 19:42:33 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:05.030 19:42:33 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:05.030 19:42:33 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.030 ************************************ 00:04:05.030 START TEST env_memory 00:04:05.030 ************************************ 00:04:05.030 19:42:33 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:05.030 00:04:05.030 00:04:05.030 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.030 http://cunit.sourceforge.net/ 00:04:05.030 00:04:05.030 00:04:05.030 Suite: memory 00:04:05.030 Test: alloc and free memory map ...[2024-07-24 19:42:33.645696] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:05.030 passed 00:04:05.030 Test: mem map translation ...[2024-07-24 19:42:33.679362] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:05.030 [2024-07-24 19:42:33.679434] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:05.030 [2024-07-24 19:42:33.679497] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:05.030 [2024-07-24 19:42:33.679511] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:05.289 passed 00:04:05.289 Test: mem map registration ...[2024-07-24 19:42:33.744940] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:05.289 [2024-07-24 19:42:33.745025] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:05.289 passed 00:04:05.289 Test: mem map adjacent registrations ...passed 00:04:05.289 00:04:05.289 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.289 suites 1 1 n/a 0 0 00:04:05.289 tests 4 4 4 0 0 00:04:05.289 asserts 152 152 152 0 n/a 00:04:05.289 00:04:05.289 Elapsed time = 0.219 seconds 00:04:05.289 00:04:05.289 real 0m0.237s 00:04:05.289 user 0m0.219s 00:04:05.289 sys 0m0.013s 00:04:05.289 19:42:33 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:05.289 19:42:33 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:05.289 ************************************ 00:04:05.289 END TEST env_memory 00:04:05.289 ************************************ 00:04:05.289 19:42:33 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:05.289 19:42:33 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:05.289 19:42:33 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:05.289 19:42:33 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.289 ************************************ 00:04:05.289 START TEST env_vtophys 00:04:05.289 ************************************ 00:04:05.289 19:42:33 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:05.289 EAL: lib.eal log level changed from notice to debug 00:04:05.289 EAL: Detected lcore 0 as core 0 on socket 0 00:04:05.290 EAL: Detected lcore 1 as core 0 on socket 0 00:04:05.290 EAL: Detected lcore 2 as core 0 on socket 0 00:04:05.290 EAL: Detected lcore 3 as core 0 on socket 0 00:04:05.290 EAL: Detected lcore 4 as 
core 0 on socket 0 00:04:05.290 EAL: Detected lcore 5 as core 0 on socket 0 00:04:05.290 EAL: Detected lcore 6 as core 0 on socket 0 00:04:05.290 EAL: Detected lcore 7 as core 0 on socket 0 00:04:05.290 EAL: Detected lcore 8 as core 0 on socket 0 00:04:05.290 EAL: Detected lcore 9 as core 0 on socket 0 00:04:05.290 EAL: Maximum logical cores by configuration: 128 00:04:05.290 EAL: Detected CPU lcores: 10 00:04:05.290 EAL: Detected NUMA nodes: 1 00:04:05.290 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:05.290 EAL: Detected shared linkage of DPDK 00:04:05.290 EAL: No shared files mode enabled, IPC will be disabled 00:04:05.290 EAL: Selected IOVA mode 'PA' 00:04:05.290 EAL: Probing VFIO support... 00:04:05.290 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:05.290 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:05.290 EAL: Ask a virtual area of 0x2e000 bytes 00:04:05.290 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:05.290 EAL: Setting up physically contiguous memory... 
00:04:05.290 EAL: Setting maximum number of open files to 524288 00:04:05.290 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:05.290 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:05.290 EAL: Ask a virtual area of 0x61000 bytes 00:04:05.290 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:05.290 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:05.290 EAL: Ask a virtual area of 0x400000000 bytes 00:04:05.290 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:05.290 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:05.290 EAL: Ask a virtual area of 0x61000 bytes 00:04:05.290 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:05.290 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:05.290 EAL: Ask a virtual area of 0x400000000 bytes 00:04:05.290 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:05.290 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:05.290 EAL: Ask a virtual area of 0x61000 bytes 00:04:05.290 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:05.290 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:05.290 EAL: Ask a virtual area of 0x400000000 bytes 00:04:05.290 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:05.290 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:05.290 EAL: Ask a virtual area of 0x61000 bytes 00:04:05.290 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:05.290 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:05.290 EAL: Ask a virtual area of 0x400000000 bytes 00:04:05.290 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:05.290 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:05.290 EAL: Hugepages will be freed exactly as allocated. 
00:04:05.290 EAL: No shared files mode enabled, IPC is disabled 00:04:05.290 EAL: No shared files mode enabled, IPC is disabled 00:04:05.548 EAL: TSC frequency is ~2100000 KHz 00:04:05.548 EAL: Main lcore 0 is ready (tid=7f8894011a00;cpuset=[0]) 00:04:05.548 EAL: Trying to obtain current memory policy. 00:04:05.548 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.548 EAL: Restoring previous memory policy: 0 00:04:05.548 EAL: request: mp_malloc_sync 00:04:05.548 EAL: No shared files mode enabled, IPC is disabled 00:04:05.548 EAL: Heap on socket 0 was expanded by 2MB 00:04:05.548 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:05.548 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:05.548 EAL: Mem event callback 'spdk:(nil)' registered 00:04:05.548 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:05.548 00:04:05.548 00:04:05.548 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.548 http://cunit.sourceforge.net/ 00:04:05.548 00:04:05.548 00:04:05.548 Suite: components_suite 00:04:05.548 Test: vtophys_malloc_test ...passed 00:04:05.548 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:05.548 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.548 EAL: Restoring previous memory policy: 4 00:04:05.548 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.548 EAL: request: mp_malloc_sync 00:04:05.548 EAL: No shared files mode enabled, IPC is disabled 00:04:05.548 EAL: Heap on socket 0 was expanded by 4MB 00:04:05.548 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.548 EAL: request: mp_malloc_sync 00:04:05.548 EAL: No shared files mode enabled, IPC is disabled 00:04:05.548 EAL: Heap on socket 0 was shrunk by 4MB 00:04:05.548 EAL: Trying to obtain current memory policy. 
00:04:05.548 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.548 EAL: Restoring previous memory policy: 4 00:04:05.548 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.548 EAL: request: mp_malloc_sync 00:04:05.548 EAL: No shared files mode enabled, IPC is disabled 00:04:05.548 EAL: Heap on socket 0 was expanded by 6MB 00:04:05.548 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.549 EAL: request: mp_malloc_sync 00:04:05.549 EAL: No shared files mode enabled, IPC is disabled 00:04:05.549 EAL: Heap on socket 0 was shrunk by 6MB 00:04:05.549 EAL: Trying to obtain current memory policy. 00:04:05.549 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.549 EAL: Restoring previous memory policy: 4 00:04:05.549 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.549 EAL: request: mp_malloc_sync 00:04:05.549 EAL: No shared files mode enabled, IPC is disabled 00:04:05.549 EAL: Heap on socket 0 was expanded by 10MB 00:04:05.549 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.549 EAL: request: mp_malloc_sync 00:04:05.549 EAL: No shared files mode enabled, IPC is disabled 00:04:05.549 EAL: Heap on socket 0 was shrunk by 10MB 00:04:05.549 EAL: Trying to obtain current memory policy. 00:04:05.549 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.549 EAL: Restoring previous memory policy: 4 00:04:05.549 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.549 EAL: request: mp_malloc_sync 00:04:05.549 EAL: No shared files mode enabled, IPC is disabled 00:04:05.549 EAL: Heap on socket 0 was expanded by 18MB 00:04:05.549 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.549 EAL: request: mp_malloc_sync 00:04:05.549 EAL: No shared files mode enabled, IPC is disabled 00:04:05.549 EAL: Heap on socket 0 was shrunk by 18MB 00:04:05.549 EAL: Trying to obtain current memory policy. 
00:04:05.549 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.549 EAL: Restoring previous memory policy: 4 00:04:05.549 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.549 EAL: request: mp_malloc_sync 00:04:05.549 EAL: No shared files mode enabled, IPC is disabled 00:04:05.549 EAL: Heap on socket 0 was expanded by 34MB 00:04:05.549 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.549 EAL: request: mp_malloc_sync 00:04:05.549 EAL: No shared files mode enabled, IPC is disabled 00:04:05.549 EAL: Heap on socket 0 was shrunk by 34MB 00:04:05.549 EAL: Trying to obtain current memory policy. 00:04:05.549 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.549 EAL: Restoring previous memory policy: 4 00:04:05.549 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.549 EAL: request: mp_malloc_sync 00:04:05.549 EAL: No shared files mode enabled, IPC is disabled 00:04:05.549 EAL: Heap on socket 0 was expanded by 66MB 00:04:05.549 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.549 EAL: request: mp_malloc_sync 00:04:05.549 EAL: No shared files mode enabled, IPC is disabled 00:04:05.549 EAL: Heap on socket 0 was shrunk by 66MB 00:04:05.549 EAL: Trying to obtain current memory policy. 00:04:05.549 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.549 EAL: Restoring previous memory policy: 4 00:04:05.549 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.549 EAL: request: mp_malloc_sync 00:04:05.549 EAL: No shared files mode enabled, IPC is disabled 00:04:05.549 EAL: Heap on socket 0 was expanded by 130MB 00:04:05.549 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.549 EAL: request: mp_malloc_sync 00:04:05.549 EAL: No shared files mode enabled, IPC is disabled 00:04:05.549 EAL: Heap on socket 0 was shrunk by 130MB 00:04:05.549 EAL: Trying to obtain current memory policy. 
00:04:05.549 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.808 EAL: Restoring previous memory policy: 4 00:04:05.808 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.808 EAL: request: mp_malloc_sync 00:04:05.808 EAL: No shared files mode enabled, IPC is disabled 00:04:05.808 EAL: Heap on socket 0 was expanded by 258MB 00:04:05.808 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.808 EAL: request: mp_malloc_sync 00:04:05.808 EAL: No shared files mode enabled, IPC is disabled 00:04:05.808 EAL: Heap on socket 0 was shrunk by 258MB 00:04:05.808 EAL: Trying to obtain current memory policy. 00:04:05.808 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.808 EAL: Restoring previous memory policy: 4 00:04:05.808 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.808 EAL: request: mp_malloc_sync 00:04:05.808 EAL: No shared files mode enabled, IPC is disabled 00:04:05.808 EAL: Heap on socket 0 was expanded by 514MB 00:04:06.065 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.066 EAL: request: mp_malloc_sync 00:04:06.066 EAL: No shared files mode enabled, IPC is disabled 00:04:06.066 EAL: Heap on socket 0 was shrunk by 514MB 00:04:06.066 EAL: Trying to obtain current memory policy. 
00:04:06.066 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.323 EAL: Restoring previous memory policy: 4 00:04:06.323 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.323 EAL: request: mp_malloc_sync 00:04:06.323 EAL: No shared files mode enabled, IPC is disabled 00:04:06.323 EAL: Heap on socket 0 was expanded by 1026MB 00:04:06.323 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.581 passed 00:04:06.581 00:04:06.581 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.581 suites 1 1 n/a 0 0 00:04:06.581 tests 2 2 2 0 0 00:04:06.581 asserts 5337 5337 5337 0 n/a 00:04:06.581 00:04:06.581 Elapsed time = 1.051 seconds 00:04:06.581 EAL: request: mp_malloc_sync 00:04:06.581 EAL: No shared files mode enabled, IPC is disabled 00:04:06.581 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:06.581 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.581 EAL: request: mp_malloc_sync 00:04:06.581 EAL: No shared files mode enabled, IPC is disabled 00:04:06.581 EAL: Heap on socket 0 was shrunk by 2MB 00:04:06.581 EAL: No shared files mode enabled, IPC is disabled 00:04:06.581 EAL: No shared files mode enabled, IPC is disabled 00:04:06.581 EAL: No shared files mode enabled, IPC is disabled 00:04:06.581 00:04:06.581 real 0m1.241s 00:04:06.581 user 0m0.664s 00:04:06.581 sys 0m0.445s 00:04:06.581 19:42:35 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:06.581 ************************************ 00:04:06.581 END TEST env_vtophys 00:04:06.581 ************************************ 00:04:06.581 19:42:35 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:06.581 19:42:35 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:06.581 19:42:35 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:06.581 19:42:35 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.581 19:42:35 env -- common/autotest_common.sh@10 -- # set +x 00:04:06.581 
************************************ 00:04:06.581 START TEST env_pci 00:04:06.581 ************************************ 00:04:06.581 19:42:35 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:06.581 00:04:06.581 00:04:06.581 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.581 http://cunit.sourceforge.net/ 00:04:06.581 00:04:06.581 00:04:06.581 Suite: pci 00:04:06.581 Test: pci_hook ...[2024-07-24 19:42:35.184095] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58748 has claimed it 00:04:06.581 passed 00:04:06.581 00:04:06.581 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.581 suites 1 1 n/a 0 0 00:04:06.581 EAL: Cannot find device (10000:00:01.0) 00:04:06.581 EAL: Failed to attach device on primary process 00:04:06.581 tests 1 1 1 0 0 00:04:06.581 asserts 25 25 25 0 n/a 00:04:06.581 00:04:06.581 Elapsed time = 0.002 seconds 00:04:06.581 00:04:06.581 real 0m0.024s 00:04:06.581 user 0m0.014s 00:04:06.581 sys 0m0.010s 00:04:06.581 19:42:35 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:06.581 19:42:35 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:06.581 ************************************ 00:04:06.581 END TEST env_pci 00:04:06.581 ************************************ 00:04:06.581 19:42:35 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:06.581 19:42:35 env -- env/env.sh@15 -- # uname 00:04:06.581 19:42:35 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:06.581 19:42:35 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:06.581 19:42:35 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:06.581 19:42:35 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:06.582 19:42:35 env 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.582 19:42:35 env -- common/autotest_common.sh@10 -- # set +x 00:04:06.582 ************************************ 00:04:06.582 START TEST env_dpdk_post_init 00:04:06.582 ************************************ 00:04:06.582 19:42:35 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:06.848 EAL: Detected CPU lcores: 10 00:04:06.849 EAL: Detected NUMA nodes: 1 00:04:06.849 EAL: Detected shared linkage of DPDK 00:04:06.849 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:06.849 EAL: Selected IOVA mode 'PA' 00:04:06.849 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:06.849 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:06.849 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:06.849 Starting DPDK initialization... 00:04:06.849 Starting SPDK post initialization... 00:04:06.849 SPDK NVMe probe 00:04:06.849 Attaching to 0000:00:10.0 00:04:06.849 Attaching to 0000:00:11.0 00:04:06.849 Attached to 0000:00:10.0 00:04:06.849 Attached to 0000:00:11.0 00:04:06.849 Cleaning up... 
00:04:06.849 00:04:06.849 real 0m0.180s 00:04:06.849 user 0m0.040s 00:04:06.849 sys 0m0.041s 00:04:06.849 19:42:35 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:06.849 19:42:35 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:06.849 ************************************ 00:04:06.849 END TEST env_dpdk_post_init 00:04:06.849 ************************************ 00:04:06.849 19:42:35 env -- env/env.sh@26 -- # uname 00:04:06.849 19:42:35 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:06.849 19:42:35 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:06.849 19:42:35 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:06.849 19:42:35 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.849 19:42:35 env -- common/autotest_common.sh@10 -- # set +x 00:04:06.849 ************************************ 00:04:06.849 START TEST env_mem_callbacks 00:04:06.849 ************************************ 00:04:06.849 19:42:35 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:06.849 EAL: Detected CPU lcores: 10 00:04:06.849 EAL: Detected NUMA nodes: 1 00:04:06.849 EAL: Detected shared linkage of DPDK 00:04:06.849 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:06.849 EAL: Selected IOVA mode 'PA' 00:04:07.107 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:07.107 00:04:07.107 00:04:07.107 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.107 http://cunit.sourceforge.net/ 00:04:07.107 00:04:07.107 00:04:07.107 Suite: memory 00:04:07.107 Test: test ... 
00:04:07.107 register 0x200000200000 2097152 00:04:07.107 malloc 3145728 00:04:07.107 register 0x200000400000 4194304 00:04:07.107 buf 0x200000500000 len 3145728 PASSED 00:04:07.107 malloc 64 00:04:07.107 buf 0x2000004fff40 len 64 PASSED 00:04:07.107 malloc 4194304 00:04:07.107 register 0x200000800000 6291456 00:04:07.107 buf 0x200000a00000 len 4194304 PASSED 00:04:07.107 free 0x200000500000 3145728 00:04:07.107 free 0x2000004fff40 64 00:04:07.107 unregister 0x200000400000 4194304 PASSED 00:04:07.107 free 0x200000a00000 4194304 00:04:07.107 unregister 0x200000800000 6291456 PASSED 00:04:07.107 malloc 8388608 00:04:07.107 register 0x200000400000 10485760 00:04:07.107 buf 0x200000600000 len 8388608 PASSED 00:04:07.107 free 0x200000600000 8388608 00:04:07.107 unregister 0x200000400000 10485760 PASSED 00:04:07.107 passed 00:04:07.107 00:04:07.107 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.107 suites 1 1 n/a 0 0 00:04:07.107 tests 1 1 1 0 0 00:04:07.107 asserts 15 15 15 0 n/a 00:04:07.107 00:04:07.107 Elapsed time = 0.008 seconds 00:04:07.107 00:04:07.107 real 0m0.151s 00:04:07.107 user 0m0.022s 00:04:07.107 sys 0m0.029s 00:04:07.107 19:42:35 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.107 ************************************ 00:04:07.107 END TEST env_mem_callbacks 00:04:07.107 19:42:35 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:07.107 ************************************ 00:04:07.107 ************************************ 00:04:07.107 END TEST env 00:04:07.107 ************************************ 00:04:07.107 00:04:07.107 real 0m2.152s 00:04:07.107 user 0m1.064s 00:04:07.107 sys 0m0.755s 00:04:07.107 19:42:35 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.107 19:42:35 env -- common/autotest_common.sh@10 -- # set +x 00:04:07.107 19:42:35 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:07.107 19:42:35 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:07.107 19:42:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.107 19:42:35 -- common/autotest_common.sh@10 -- # set +x 00:04:07.107 ************************************ 00:04:07.107 START TEST rpc 00:04:07.107 ************************************ 00:04:07.107 19:42:35 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:07.366 * Looking for test storage... 00:04:07.366 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:07.366 19:42:35 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58863 00:04:07.366 19:42:35 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:07.366 19:42:35 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:07.366 19:42:35 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58863 00:04:07.366 19:42:35 rpc -- common/autotest_common.sh@831 -- # '[' -z 58863 ']' 00:04:07.366 19:42:35 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:07.366 19:42:35 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:07.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:07.366 19:42:35 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:07.366 19:42:35 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:07.366 19:42:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.366 [2024-07-24 19:42:35.851925] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:04:07.366 [2024-07-24 19:42:35.852064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58863 ] 00:04:07.366 [2024-07-24 19:42:35.997313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.625 [2024-07-24 19:42:36.120564] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:07.625 [2024-07-24 19:42:36.120631] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58863' to capture a snapshot of events at runtime. 00:04:07.625 [2024-07-24 19:42:36.120647] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:07.625 [2024-07-24 19:42:36.120660] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:07.625 [2024-07-24 19:42:36.120671] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58863 for offline analysis/debug. 
00:04:07.625 [2024-07-24 19:42:36.120709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.625 [2024-07-24 19:42:36.169798] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:08.190 19:42:36 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:08.190 19:42:36 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:08.190 19:42:36 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:08.190 19:42:36 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:08.190 19:42:36 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:08.190 19:42:36 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:08.190 19:42:36 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:08.190 19:42:36 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:08.190 19:42:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.190 ************************************ 00:04:08.190 START TEST rpc_integrity 00:04:08.190 ************************************ 00:04:08.190 19:42:36 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:08.190 19:42:36 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:08.190 19:42:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.190 19:42:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.190 19:42:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.190 19:42:36 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:08.190 19:42:36 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # jq length 00:04:08.190 19:42:36 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:08.190 19:42:36 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:08.190 19:42:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.190 19:42:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.190 19:42:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.190 19:42:36 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:08.190 19:42:36 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:08.190 19:42:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.190 19:42:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.190 19:42:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.190 19:42:36 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:08.190 { 00:04:08.190 "name": "Malloc0", 00:04:08.190 "aliases": [ 00:04:08.190 "bd9419ff-65e9-4edc-9f1c-1dde921e4abc" 00:04:08.190 ], 00:04:08.190 "product_name": "Malloc disk", 00:04:08.190 "block_size": 512, 00:04:08.190 "num_blocks": 16384, 00:04:08.190 "uuid": "bd9419ff-65e9-4edc-9f1c-1dde921e4abc", 00:04:08.190 "assigned_rate_limits": { 00:04:08.190 "rw_ios_per_sec": 0, 00:04:08.190 "rw_mbytes_per_sec": 0, 00:04:08.190 "r_mbytes_per_sec": 0, 00:04:08.190 "w_mbytes_per_sec": 0 00:04:08.190 }, 00:04:08.190 "claimed": false, 00:04:08.190 "zoned": false, 00:04:08.190 "supported_io_types": { 00:04:08.190 "read": true, 00:04:08.190 "write": true, 00:04:08.190 "unmap": true, 00:04:08.190 "flush": true, 00:04:08.190 "reset": true, 00:04:08.190 "nvme_admin": false, 00:04:08.190 "nvme_io": false, 00:04:08.190 "nvme_io_md": false, 00:04:08.190 "write_zeroes": true, 00:04:08.190 "zcopy": true, 00:04:08.190 "get_zone_info": false, 00:04:08.190 "zone_management": false, 00:04:08.190 "zone_append": false, 
00:04:08.190 "compare": false, 00:04:08.190 "compare_and_write": false, 00:04:08.190 "abort": true, 00:04:08.190 "seek_hole": false, 00:04:08.190 "seek_data": false, 00:04:08.190 "copy": true, 00:04:08.190 "nvme_iov_md": false 00:04:08.190 }, 00:04:08.190 "memory_domains": [ 00:04:08.190 { 00:04:08.190 "dma_device_id": "system", 00:04:08.190 "dma_device_type": 1 00:04:08.190 }, 00:04:08.190 { 00:04:08.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.190 "dma_device_type": 2 00:04:08.190 } 00:04:08.190 ], 00:04:08.190 "driver_specific": {} 00:04:08.190 } 00:04:08.190 ]' 00:04:08.190 19:42:36 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:08.448 19:42:36 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:08.448 19:42:36 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:08.448 19:42:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.448 19:42:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.448 [2024-07-24 19:42:36.879475] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:08.448 [2024-07-24 19:42:36.879567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:08.448 [2024-07-24 19:42:36.879589] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c6dda0 00:04:08.448 [2024-07-24 19:42:36.879599] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:08.448 [2024-07-24 19:42:36.881535] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:08.448 [2024-07-24 19:42:36.881584] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:08.448 Passthru0 00:04:08.448 19:42:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.448 19:42:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:08.448 19:42:36 rpc.rpc_integrity -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.448 19:42:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.448 19:42:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.448 19:42:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:08.448 { 00:04:08.448 "name": "Malloc0", 00:04:08.448 "aliases": [ 00:04:08.448 "bd9419ff-65e9-4edc-9f1c-1dde921e4abc" 00:04:08.448 ], 00:04:08.448 "product_name": "Malloc disk", 00:04:08.448 "block_size": 512, 00:04:08.448 "num_blocks": 16384, 00:04:08.448 "uuid": "bd9419ff-65e9-4edc-9f1c-1dde921e4abc", 00:04:08.448 "assigned_rate_limits": { 00:04:08.448 "rw_ios_per_sec": 0, 00:04:08.448 "rw_mbytes_per_sec": 0, 00:04:08.448 "r_mbytes_per_sec": 0, 00:04:08.448 "w_mbytes_per_sec": 0 00:04:08.448 }, 00:04:08.448 "claimed": true, 00:04:08.448 "claim_type": "exclusive_write", 00:04:08.448 "zoned": false, 00:04:08.448 "supported_io_types": { 00:04:08.448 "read": true, 00:04:08.448 "write": true, 00:04:08.448 "unmap": true, 00:04:08.448 "flush": true, 00:04:08.448 "reset": true, 00:04:08.448 "nvme_admin": false, 00:04:08.448 "nvme_io": false, 00:04:08.448 "nvme_io_md": false, 00:04:08.448 "write_zeroes": true, 00:04:08.448 "zcopy": true, 00:04:08.448 "get_zone_info": false, 00:04:08.448 "zone_management": false, 00:04:08.448 "zone_append": false, 00:04:08.448 "compare": false, 00:04:08.448 "compare_and_write": false, 00:04:08.448 "abort": true, 00:04:08.448 "seek_hole": false, 00:04:08.448 "seek_data": false, 00:04:08.448 "copy": true, 00:04:08.449 "nvme_iov_md": false 00:04:08.449 }, 00:04:08.449 "memory_domains": [ 00:04:08.449 { 00:04:08.449 "dma_device_id": "system", 00:04:08.449 "dma_device_type": 1 00:04:08.449 }, 00:04:08.449 { 00:04:08.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.449 "dma_device_type": 2 00:04:08.449 } 00:04:08.449 ], 00:04:08.449 "driver_specific": {} 00:04:08.449 }, 00:04:08.449 { 00:04:08.449 "name": "Passthru0", 00:04:08.449 "aliases": 
[ 00:04:08.449 "95e97e59-59ff-53be-8bf6-01ae1aec4861" 00:04:08.449 ], 00:04:08.449 "product_name": "passthru", 00:04:08.449 "block_size": 512, 00:04:08.449 "num_blocks": 16384, 00:04:08.449 "uuid": "95e97e59-59ff-53be-8bf6-01ae1aec4861", 00:04:08.449 "assigned_rate_limits": { 00:04:08.449 "rw_ios_per_sec": 0, 00:04:08.449 "rw_mbytes_per_sec": 0, 00:04:08.449 "r_mbytes_per_sec": 0, 00:04:08.449 "w_mbytes_per_sec": 0 00:04:08.449 }, 00:04:08.449 "claimed": false, 00:04:08.449 "zoned": false, 00:04:08.449 "supported_io_types": { 00:04:08.449 "read": true, 00:04:08.449 "write": true, 00:04:08.449 "unmap": true, 00:04:08.449 "flush": true, 00:04:08.449 "reset": true, 00:04:08.449 "nvme_admin": false, 00:04:08.449 "nvme_io": false, 00:04:08.449 "nvme_io_md": false, 00:04:08.449 "write_zeroes": true, 00:04:08.449 "zcopy": true, 00:04:08.449 "get_zone_info": false, 00:04:08.449 "zone_management": false, 00:04:08.449 "zone_append": false, 00:04:08.449 "compare": false, 00:04:08.449 "compare_and_write": false, 00:04:08.449 "abort": true, 00:04:08.449 "seek_hole": false, 00:04:08.449 "seek_data": false, 00:04:08.449 "copy": true, 00:04:08.449 "nvme_iov_md": false 00:04:08.449 }, 00:04:08.449 "memory_domains": [ 00:04:08.449 { 00:04:08.449 "dma_device_id": "system", 00:04:08.449 "dma_device_type": 1 00:04:08.449 }, 00:04:08.449 { 00:04:08.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.449 "dma_device_type": 2 00:04:08.449 } 00:04:08.449 ], 00:04:08.449 "driver_specific": { 00:04:08.449 "passthru": { 00:04:08.449 "name": "Passthru0", 00:04:08.449 "base_bdev_name": "Malloc0" 00:04:08.449 } 00:04:08.449 } 00:04:08.449 } 00:04:08.449 ]' 00:04:08.449 19:42:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:08.449 19:42:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:08.449 19:42:36 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:08.449 19:42:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 
00:04:08.449 19:42:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.449 19:42:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.449 19:42:36 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:08.449 19:42:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.449 19:42:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.449 19:42:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.449 19:42:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:08.449 19:42:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.449 19:42:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.449 19:42:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.449 19:42:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:08.449 19:42:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:08.449 ************************************ 00:04:08.449 END TEST rpc_integrity 00:04:08.449 ************************************ 00:04:08.449 19:42:37 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:08.449 00:04:08.449 real 0m0.269s 00:04:08.449 user 0m0.172s 00:04:08.449 sys 0m0.031s 00:04:08.449 19:42:37 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:08.449 19:42:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.449 19:42:37 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:08.449 19:42:37 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:08.449 19:42:37 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:08.449 19:42:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.449 ************************************ 00:04:08.449 START TEST rpc_plugins 00:04:08.449 ************************************ 00:04:08.449 19:42:37 
rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:08.449 19:42:37 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:08.449 19:42:37 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.449 19:42:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:08.449 19:42:37 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.449 19:42:37 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:08.449 19:42:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:08.449 19:42:37 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.449 19:42:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:08.449 19:42:37 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.449 19:42:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:08.449 { 00:04:08.449 "name": "Malloc1", 00:04:08.449 "aliases": [ 00:04:08.449 "b9ac24e5-a26e-4bb9-8d8a-4485bec9052b" 00:04:08.449 ], 00:04:08.449 "product_name": "Malloc disk", 00:04:08.449 "block_size": 4096, 00:04:08.449 "num_blocks": 256, 00:04:08.449 "uuid": "b9ac24e5-a26e-4bb9-8d8a-4485bec9052b", 00:04:08.449 "assigned_rate_limits": { 00:04:08.449 "rw_ios_per_sec": 0, 00:04:08.449 "rw_mbytes_per_sec": 0, 00:04:08.449 "r_mbytes_per_sec": 0, 00:04:08.449 "w_mbytes_per_sec": 0 00:04:08.449 }, 00:04:08.449 "claimed": false, 00:04:08.449 "zoned": false, 00:04:08.449 "supported_io_types": { 00:04:08.449 "read": true, 00:04:08.449 "write": true, 00:04:08.449 "unmap": true, 00:04:08.449 "flush": true, 00:04:08.449 "reset": true, 00:04:08.449 "nvme_admin": false, 00:04:08.449 "nvme_io": false, 00:04:08.449 "nvme_io_md": false, 00:04:08.449 "write_zeroes": true, 00:04:08.449 "zcopy": true, 00:04:08.449 "get_zone_info": false, 00:04:08.449 "zone_management": false, 00:04:08.449 "zone_append": false, 00:04:08.449 "compare": false, 00:04:08.449 
"compare_and_write": false, 00:04:08.449 "abort": true, 00:04:08.449 "seek_hole": false, 00:04:08.449 "seek_data": false, 00:04:08.449 "copy": true, 00:04:08.449 "nvme_iov_md": false 00:04:08.449 }, 00:04:08.449 "memory_domains": [ 00:04:08.449 { 00:04:08.449 "dma_device_id": "system", 00:04:08.449 "dma_device_type": 1 00:04:08.449 }, 00:04:08.449 { 00:04:08.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.449 "dma_device_type": 2 00:04:08.449 } 00:04:08.449 ], 00:04:08.449 "driver_specific": {} 00:04:08.449 } 00:04:08.449 ]' 00:04:08.449 19:42:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:08.707 19:42:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:08.707 19:42:37 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:08.707 19:42:37 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.707 19:42:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:08.707 19:42:37 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.707 19:42:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:08.707 19:42:37 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.707 19:42:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:08.707 19:42:37 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.707 19:42:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:08.707 19:42:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:08.707 ************************************ 00:04:08.707 END TEST rpc_plugins 00:04:08.707 ************************************ 00:04:08.707 19:42:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:08.707 00:04:08.707 real 0m0.171s 00:04:08.707 user 0m0.115s 00:04:08.707 sys 0m0.017s 00:04:08.707 19:42:37 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:08.707 19:42:37 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:04:08.707 19:42:37 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:08.708 19:42:37 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:08.708 19:42:37 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:08.708 19:42:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.708 ************************************ 00:04:08.708 START TEST rpc_trace_cmd_test 00:04:08.708 ************************************ 00:04:08.708 19:42:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:08.708 19:42:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:08.708 19:42:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:08.708 19:42:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.708 19:42:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:08.708 19:42:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.708 19:42:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:08.708 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58863", 00:04:08.708 "tpoint_group_mask": "0x8", 00:04:08.708 "iscsi_conn": { 00:04:08.708 "mask": "0x2", 00:04:08.708 "tpoint_mask": "0x0" 00:04:08.708 }, 00:04:08.708 "scsi": { 00:04:08.708 "mask": "0x4", 00:04:08.708 "tpoint_mask": "0x0" 00:04:08.708 }, 00:04:08.708 "bdev": { 00:04:08.708 "mask": "0x8", 00:04:08.708 "tpoint_mask": "0xffffffffffffffff" 00:04:08.708 }, 00:04:08.708 "nvmf_rdma": { 00:04:08.708 "mask": "0x10", 00:04:08.708 "tpoint_mask": "0x0" 00:04:08.708 }, 00:04:08.708 "nvmf_tcp": { 00:04:08.708 "mask": "0x20", 00:04:08.708 "tpoint_mask": "0x0" 00:04:08.708 }, 00:04:08.708 "ftl": { 00:04:08.708 "mask": "0x40", 00:04:08.708 "tpoint_mask": "0x0" 00:04:08.708 }, 00:04:08.708 "blobfs": { 00:04:08.708 "mask": "0x80", 00:04:08.708 "tpoint_mask": "0x0" 00:04:08.708 }, 
00:04:08.708 "dsa": { 00:04:08.708 "mask": "0x200", 00:04:08.708 "tpoint_mask": "0x0" 00:04:08.708 }, 00:04:08.708 "thread": { 00:04:08.708 "mask": "0x400", 00:04:08.708 "tpoint_mask": "0x0" 00:04:08.708 }, 00:04:08.708 "nvme_pcie": { 00:04:08.708 "mask": "0x800", 00:04:08.708 "tpoint_mask": "0x0" 00:04:08.708 }, 00:04:08.708 "iaa": { 00:04:08.708 "mask": "0x1000", 00:04:08.708 "tpoint_mask": "0x0" 00:04:08.708 }, 00:04:08.708 "nvme_tcp": { 00:04:08.708 "mask": "0x2000", 00:04:08.708 "tpoint_mask": "0x0" 00:04:08.708 }, 00:04:08.708 "bdev_nvme": { 00:04:08.708 "mask": "0x4000", 00:04:08.708 "tpoint_mask": "0x0" 00:04:08.708 }, 00:04:08.708 "sock": { 00:04:08.708 "mask": "0x8000", 00:04:08.708 "tpoint_mask": "0x0" 00:04:08.708 } 00:04:08.708 }' 00:04:08.708 19:42:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:08.708 19:42:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:08.708 19:42:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:08.966 19:42:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:08.966 19:42:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:08.966 19:42:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:08.966 19:42:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:08.966 19:42:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:08.966 19:42:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:08.966 ************************************ 00:04:08.966 END TEST rpc_trace_cmd_test 00:04:08.966 ************************************ 00:04:08.966 19:42:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:08.966 00:04:08.966 real 0m0.242s 00:04:08.966 user 0m0.199s 00:04:08.966 sys 0m0.032s 00:04:08.966 19:42:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:08.966 
19:42:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:08.966 19:42:37 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:08.966 19:42:37 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:08.966 19:42:37 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:08.966 19:42:37 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:08.966 19:42:37 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:08.966 19:42:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.966 ************************************ 00:04:08.966 START TEST rpc_daemon_integrity 00:04:08.966 ************************************ 00:04:08.966 19:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:08.966 19:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:08.966 19:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.966 19:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.966 19:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.966 19:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:08.966 19:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:09.225 19:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:09.225 19:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:09.225 19:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.225 19:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.225 19:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.225 19:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:09.225 19:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:09.225 19:42:37 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.225 19:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.225 19:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.225 19:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:09.225 { 00:04:09.225 "name": "Malloc2", 00:04:09.225 "aliases": [ 00:04:09.225 "e3923c1b-022d-4f7c-a6d0-ae3354a269d5" 00:04:09.225 ], 00:04:09.225 "product_name": "Malloc disk", 00:04:09.225 "block_size": 512, 00:04:09.225 "num_blocks": 16384, 00:04:09.225 "uuid": "e3923c1b-022d-4f7c-a6d0-ae3354a269d5", 00:04:09.225 "assigned_rate_limits": { 00:04:09.225 "rw_ios_per_sec": 0, 00:04:09.225 "rw_mbytes_per_sec": 0, 00:04:09.225 "r_mbytes_per_sec": 0, 00:04:09.225 "w_mbytes_per_sec": 0 00:04:09.225 }, 00:04:09.225 "claimed": false, 00:04:09.225 "zoned": false, 00:04:09.225 "supported_io_types": { 00:04:09.225 "read": true, 00:04:09.225 "write": true, 00:04:09.225 "unmap": true, 00:04:09.225 "flush": true, 00:04:09.225 "reset": true, 00:04:09.225 "nvme_admin": false, 00:04:09.225 "nvme_io": false, 00:04:09.225 "nvme_io_md": false, 00:04:09.225 "write_zeroes": true, 00:04:09.225 "zcopy": true, 00:04:09.225 "get_zone_info": false, 00:04:09.225 "zone_management": false, 00:04:09.225 "zone_append": false, 00:04:09.225 "compare": false, 00:04:09.225 "compare_and_write": false, 00:04:09.225 "abort": true, 00:04:09.225 "seek_hole": false, 00:04:09.225 "seek_data": false, 00:04:09.225 "copy": true, 00:04:09.225 "nvme_iov_md": false 00:04:09.225 }, 00:04:09.225 "memory_domains": [ 00:04:09.225 { 00:04:09.225 "dma_device_id": "system", 00:04:09.225 "dma_device_type": 1 00:04:09.225 }, 00:04:09.225 { 00:04:09.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.225 "dma_device_type": 2 00:04:09.225 } 00:04:09.225 ], 00:04:09.225 "driver_specific": {} 00:04:09.225 } 00:04:09.225 ]' 00:04:09.225 19:42:37 rpc.rpc_daemon_integrity -- 
rpc/rpc.sh@17 -- # jq length 00:04:09.225 19:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:09.225 19:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:09.225 19:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.225 19:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.225 [2024-07-24 19:42:37.730297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:09.225 [2024-07-24 19:42:37.730378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:09.225 [2024-07-24 19:42:37.730403] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1cd2be0 00:04:09.225 [2024-07-24 19:42:37.730417] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:09.225 [2024-07-24 19:42:37.732044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:09.225 [2024-07-24 19:42:37.732097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:09.225 Passthru0 00:04:09.225 19:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.225 19:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:09.225 19:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.225 19:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.225 19:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.225 19:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:09.225 { 00:04:09.225 "name": "Malloc2", 00:04:09.225 "aliases": [ 00:04:09.225 "e3923c1b-022d-4f7c-a6d0-ae3354a269d5" 00:04:09.225 ], 00:04:09.225 "product_name": "Malloc disk", 00:04:09.225 "block_size": 512, 00:04:09.225 "num_blocks": 16384, 00:04:09.225 
"uuid": "e3923c1b-022d-4f7c-a6d0-ae3354a269d5", 00:04:09.225 "assigned_rate_limits": { 00:04:09.225 "rw_ios_per_sec": 0, 00:04:09.225 "rw_mbytes_per_sec": 0, 00:04:09.225 "r_mbytes_per_sec": 0, 00:04:09.225 "w_mbytes_per_sec": 0 00:04:09.225 }, 00:04:09.225 "claimed": true, 00:04:09.225 "claim_type": "exclusive_write", 00:04:09.225 "zoned": false, 00:04:09.225 "supported_io_types": { 00:04:09.225 "read": true, 00:04:09.225 "write": true, 00:04:09.225 "unmap": true, 00:04:09.225 "flush": true, 00:04:09.225 "reset": true, 00:04:09.225 "nvme_admin": false, 00:04:09.225 "nvme_io": false, 00:04:09.225 "nvme_io_md": false, 00:04:09.225 "write_zeroes": true, 00:04:09.225 "zcopy": true, 00:04:09.225 "get_zone_info": false, 00:04:09.225 "zone_management": false, 00:04:09.225 "zone_append": false, 00:04:09.225 "compare": false, 00:04:09.225 "compare_and_write": false, 00:04:09.225 "abort": true, 00:04:09.225 "seek_hole": false, 00:04:09.225 "seek_data": false, 00:04:09.225 "copy": true, 00:04:09.225 "nvme_iov_md": false 00:04:09.225 }, 00:04:09.225 "memory_domains": [ 00:04:09.225 { 00:04:09.225 "dma_device_id": "system", 00:04:09.225 "dma_device_type": 1 00:04:09.225 }, 00:04:09.225 { 00:04:09.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.225 "dma_device_type": 2 00:04:09.225 } 00:04:09.225 ], 00:04:09.225 "driver_specific": {} 00:04:09.225 }, 00:04:09.225 { 00:04:09.225 "name": "Passthru0", 00:04:09.225 "aliases": [ 00:04:09.225 "2fca9060-088d-5f03-a074-26a800cc6c7d" 00:04:09.225 ], 00:04:09.225 "product_name": "passthru", 00:04:09.225 "block_size": 512, 00:04:09.225 "num_blocks": 16384, 00:04:09.225 "uuid": "2fca9060-088d-5f03-a074-26a800cc6c7d", 00:04:09.225 "assigned_rate_limits": { 00:04:09.225 "rw_ios_per_sec": 0, 00:04:09.225 "rw_mbytes_per_sec": 0, 00:04:09.225 "r_mbytes_per_sec": 0, 00:04:09.225 "w_mbytes_per_sec": 0 00:04:09.225 }, 00:04:09.225 "claimed": false, 00:04:09.225 "zoned": false, 00:04:09.225 "supported_io_types": { 00:04:09.225 "read": true, 
00:04:09.225 "write": true, 00:04:09.225 "unmap": true, 00:04:09.225 "flush": true, 00:04:09.225 "reset": true, 00:04:09.225 "nvme_admin": false, 00:04:09.225 "nvme_io": false, 00:04:09.225 "nvme_io_md": false, 00:04:09.225 "write_zeroes": true, 00:04:09.225 "zcopy": true, 00:04:09.225 "get_zone_info": false, 00:04:09.225 "zone_management": false, 00:04:09.225 "zone_append": false, 00:04:09.225 "compare": false, 00:04:09.225 "compare_and_write": false, 00:04:09.225 "abort": true, 00:04:09.225 "seek_hole": false, 00:04:09.225 "seek_data": false, 00:04:09.225 "copy": true, 00:04:09.225 "nvme_iov_md": false 00:04:09.225 }, 00:04:09.225 "memory_domains": [ 00:04:09.225 { 00:04:09.225 "dma_device_id": "system", 00:04:09.225 "dma_device_type": 1 00:04:09.225 }, 00:04:09.225 { 00:04:09.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.225 "dma_device_type": 2 00:04:09.225 } 00:04:09.225 ], 00:04:09.225 "driver_specific": { 00:04:09.225 "passthru": { 00:04:09.225 "name": "Passthru0", 00:04:09.225 "base_bdev_name": "Malloc2" 00:04:09.225 } 00:04:09.225 } 00:04:09.225 } 00:04:09.225 ]' 00:04:09.225 19:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:09.225 19:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:09.225 19:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:09.225 19:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.225 19:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.225 19:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.225 19:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:09.225 19:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.226 19:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.226 19:42:37 rpc.rpc_daemon_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.226 19:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:09.226 19:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.226 19:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.226 19:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.226 19:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:09.226 19:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:09.484 ************************************ 00:04:09.484 END TEST rpc_daemon_integrity 00:04:09.484 ************************************ 00:04:09.484 19:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:09.484 00:04:09.484 real 0m0.310s 00:04:09.484 user 0m0.202s 00:04:09.484 sys 0m0.042s 00:04:09.484 19:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:09.484 19:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.484 19:42:37 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:09.484 19:42:37 rpc -- rpc/rpc.sh@84 -- # killprocess 58863 00:04:09.484 19:42:37 rpc -- common/autotest_common.sh@950 -- # '[' -z 58863 ']' 00:04:09.484 19:42:37 rpc -- common/autotest_common.sh@954 -- # kill -0 58863 00:04:09.484 19:42:37 rpc -- common/autotest_common.sh@955 -- # uname 00:04:09.484 19:42:37 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:09.484 19:42:37 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58863 00:04:09.484 killing process with pid 58863 00:04:09.484 19:42:37 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:09.484 19:42:37 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:09.484 19:42:37 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58863' 00:04:09.484 19:42:37 
rpc -- common/autotest_common.sh@969 -- # kill 58863 00:04:09.484 19:42:37 rpc -- common/autotest_common.sh@974 -- # wait 58863 00:04:09.742 00:04:09.742 real 0m2.602s 00:04:09.742 user 0m3.274s 00:04:09.742 sys 0m0.662s 00:04:09.742 19:42:38 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:09.742 19:42:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.742 ************************************ 00:04:09.742 END TEST rpc 00:04:09.742 ************************************ 00:04:09.742 19:42:38 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:09.742 19:42:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:09.742 19:42:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:09.742 19:42:38 -- common/autotest_common.sh@10 -- # set +x 00:04:09.742 ************************************ 00:04:09.742 START TEST skip_rpc 00:04:09.742 ************************************ 00:04:09.742 19:42:38 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:10.000 * Looking for test storage... 
00:04:10.000 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:10.000 19:42:38 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:10.000 19:42:38 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:10.000 19:42:38 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:10.000 19:42:38 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:10.000 19:42:38 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:10.000 19:42:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.000 ************************************ 00:04:10.000 START TEST skip_rpc 00:04:10.000 ************************************ 00:04:10.000 19:42:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:10.000 19:42:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=59054 00:04:10.000 19:42:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:10.000 19:42:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:10.000 19:42:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:10.000 [2024-07-24 19:42:38.493439] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:04:10.000 [2024-07-24 19:42:38.493528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59054 ] 00:04:10.000 [2024-07-24 19:42:38.631845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.258 [2024-07-24 19:42:38.740555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.258 [2024-07-24 19:42:38.786618] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:15.587 19:42:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:15.587 19:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:15.587 19:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:15.587 19:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:15.587 19:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:15.587 19:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:15.587 19:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:15.587 19:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:15.587 19:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.587 19:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.587 19:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:15.587 19:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:15.587 19:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:15.587 19:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 
00:04:15.587 19:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:15.587 19:42:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:15.587 19:42:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 59054 00:04:15.587 19:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 59054 ']' 00:04:15.587 19:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 59054 00:04:15.587 19:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:15.587 19:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:15.587 19:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59054 00:04:15.587 19:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:15.587 killing process with pid 59054 00:04:15.587 19:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:15.587 19:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59054' 00:04:15.587 19:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 59054 00:04:15.587 19:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 59054 00:04:15.587 00:04:15.587 real 0m5.647s 00:04:15.587 user 0m5.288s 00:04:15.587 sys 0m0.256s 00:04:15.587 ************************************ 00:04:15.587 END TEST skip_rpc 00:04:15.587 ************************************ 00:04:15.587 19:42:44 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:15.587 19:42:44 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.587 19:42:44 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:15.587 19:42:44 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:15.587 19:42:44 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:15.587 19:42:44 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.587 ************************************ 00:04:15.587 START TEST skip_rpc_with_json 00:04:15.587 ************************************ 00:04:15.587 19:42:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:15.587 19:42:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:15.587 19:42:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59142 00:04:15.587 19:42:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:15.587 19:42:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59142 00:04:15.587 19:42:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 59142 ']' 00:04:15.587 19:42:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:15.587 19:42:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.587 19:42:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:15.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:15.587 19:42:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:15.587 19:42:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:15.587 19:42:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:15.587 [2024-07-24 19:42:44.203920] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:04:15.587 [2024-07-24 19:42:44.204056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59142 ] 00:04:15.846 [2024-07-24 19:42:44.340761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.846 [2024-07-24 19:42:44.451712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.846 [2024-07-24 19:42:44.496248] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:16.782 19:42:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:16.782 19:42:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:16.782 19:42:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:16.782 19:42:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.782 19:42:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:16.782 [2024-07-24 19:42:45.166807] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:16.782 request: 00:04:16.782 { 00:04:16.782 "trtype": "tcp", 00:04:16.782 "method": "nvmf_get_transports", 00:04:16.782 "req_id": 1 00:04:16.782 } 00:04:16.782 Got JSON-RPC error response 00:04:16.782 response: 00:04:16.782 { 00:04:16.782 "code": -19, 00:04:16.782 "message": "No such device" 00:04:16.782 } 00:04:16.782 19:42:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:16.782 19:42:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:16.782 19:42:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.782 19:42:45 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@10 -- # set +x 00:04:16.782 [2024-07-24 19:42:45.178921] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:16.782 19:42:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.782 19:42:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:16.782 19:42:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.783 19:42:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:16.783 19:42:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.783 19:42:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:16.783 { 00:04:16.783 "subsystems": [ 00:04:16.783 { 00:04:16.783 "subsystem": "keyring", 00:04:16.783 "config": [] 00:04:16.783 }, 00:04:16.783 { 00:04:16.783 "subsystem": "iobuf", 00:04:16.783 "config": [ 00:04:16.783 { 00:04:16.783 "method": "iobuf_set_options", 00:04:16.783 "params": { 00:04:16.783 "small_pool_count": 8192, 00:04:16.783 "large_pool_count": 1024, 00:04:16.783 "small_bufsize": 8192, 00:04:16.783 "large_bufsize": 135168 00:04:16.783 } 00:04:16.783 } 00:04:16.783 ] 00:04:16.783 }, 00:04:16.783 { 00:04:16.783 "subsystem": "sock", 00:04:16.783 "config": [ 00:04:16.783 { 00:04:16.783 "method": "sock_set_default_impl", 00:04:16.783 "params": { 00:04:16.783 "impl_name": "uring" 00:04:16.783 } 00:04:16.783 }, 00:04:16.783 { 00:04:16.783 "method": "sock_impl_set_options", 00:04:16.783 "params": { 00:04:16.783 "impl_name": "ssl", 00:04:16.783 "recv_buf_size": 4096, 00:04:16.783 "send_buf_size": 4096, 00:04:16.783 "enable_recv_pipe": true, 00:04:16.783 "enable_quickack": false, 00:04:16.783 "enable_placement_id": 0, 00:04:16.783 "enable_zerocopy_send_server": true, 00:04:16.783 "enable_zerocopy_send_client": false, 00:04:16.783 "zerocopy_threshold": 0, 00:04:16.783 "tls_version": 0, 
00:04:16.783 "enable_ktls": false 00:04:16.783 } 00:04:16.783 }, 00:04:16.783 { 00:04:16.783 "method": "sock_impl_set_options", 00:04:16.783 "params": { 00:04:16.783 "impl_name": "posix", 00:04:16.783 "recv_buf_size": 2097152, 00:04:16.783 "send_buf_size": 2097152, 00:04:16.783 "enable_recv_pipe": true, 00:04:16.783 "enable_quickack": false, 00:04:16.783 "enable_placement_id": 0, 00:04:16.783 "enable_zerocopy_send_server": true, 00:04:16.783 "enable_zerocopy_send_client": false, 00:04:16.783 "zerocopy_threshold": 0, 00:04:16.783 "tls_version": 0, 00:04:16.783 "enable_ktls": false 00:04:16.783 } 00:04:16.783 }, 00:04:16.783 { 00:04:16.783 "method": "sock_impl_set_options", 00:04:16.783 "params": { 00:04:16.783 "impl_name": "uring", 00:04:16.783 "recv_buf_size": 2097152, 00:04:16.783 "send_buf_size": 2097152, 00:04:16.783 "enable_recv_pipe": true, 00:04:16.783 "enable_quickack": false, 00:04:16.783 "enable_placement_id": 0, 00:04:16.783 "enable_zerocopy_send_server": false, 00:04:16.783 "enable_zerocopy_send_client": false, 00:04:16.783 "zerocopy_threshold": 0, 00:04:16.783 "tls_version": 0, 00:04:16.783 "enable_ktls": false 00:04:16.783 } 00:04:16.783 } 00:04:16.783 ] 00:04:16.783 }, 00:04:16.783 { 00:04:16.783 "subsystem": "vmd", 00:04:16.783 "config": [] 00:04:16.783 }, 00:04:16.783 { 00:04:16.783 "subsystem": "accel", 00:04:16.783 "config": [ 00:04:16.783 { 00:04:16.783 "method": "accel_set_options", 00:04:16.783 "params": { 00:04:16.783 "small_cache_size": 128, 00:04:16.783 "large_cache_size": 16, 00:04:16.783 "task_count": 2048, 00:04:16.783 "sequence_count": 2048, 00:04:16.783 "buf_count": 2048 00:04:16.783 } 00:04:16.783 } 00:04:16.783 ] 00:04:16.783 }, 00:04:16.783 { 00:04:16.783 "subsystem": "bdev", 00:04:16.783 "config": [ 00:04:16.783 { 00:04:16.783 "method": "bdev_set_options", 00:04:16.783 "params": { 00:04:16.783 "bdev_io_pool_size": 65535, 00:04:16.783 "bdev_io_cache_size": 256, 00:04:16.783 "bdev_auto_examine": true, 00:04:16.783 
"iobuf_small_cache_size": 128, 00:04:16.783 "iobuf_large_cache_size": 16 00:04:16.783 } 00:04:16.783 }, 00:04:16.783 { 00:04:16.783 "method": "bdev_raid_set_options", 00:04:16.783 "params": { 00:04:16.783 "process_window_size_kb": 1024, 00:04:16.783 "process_max_bandwidth_mb_sec": 0 00:04:16.783 } 00:04:16.783 }, 00:04:16.783 { 00:04:16.783 "method": "bdev_iscsi_set_options", 00:04:16.783 "params": { 00:04:16.783 "timeout_sec": 30 00:04:16.783 } 00:04:16.783 }, 00:04:16.783 { 00:04:16.783 "method": "bdev_nvme_set_options", 00:04:16.783 "params": { 00:04:16.783 "action_on_timeout": "none", 00:04:16.783 "timeout_us": 0, 00:04:16.783 "timeout_admin_us": 0, 00:04:16.783 "keep_alive_timeout_ms": 10000, 00:04:16.783 "arbitration_burst": 0, 00:04:16.783 "low_priority_weight": 0, 00:04:16.783 "medium_priority_weight": 0, 00:04:16.783 "high_priority_weight": 0, 00:04:16.783 "nvme_adminq_poll_period_us": 10000, 00:04:16.783 "nvme_ioq_poll_period_us": 0, 00:04:16.783 "io_queue_requests": 0, 00:04:16.783 "delay_cmd_submit": true, 00:04:16.783 "transport_retry_count": 4, 00:04:16.783 "bdev_retry_count": 3, 00:04:16.783 "transport_ack_timeout": 0, 00:04:16.783 "ctrlr_loss_timeout_sec": 0, 00:04:16.783 "reconnect_delay_sec": 0, 00:04:16.783 "fast_io_fail_timeout_sec": 0, 00:04:16.783 "disable_auto_failback": false, 00:04:16.783 "generate_uuids": false, 00:04:16.783 "transport_tos": 0, 00:04:16.783 "nvme_error_stat": false, 00:04:16.783 "rdma_srq_size": 0, 00:04:16.783 "io_path_stat": false, 00:04:16.783 "allow_accel_sequence": false, 00:04:16.783 "rdma_max_cq_size": 0, 00:04:16.783 "rdma_cm_event_timeout_ms": 0, 00:04:16.783 "dhchap_digests": [ 00:04:16.783 "sha256", 00:04:16.783 "sha384", 00:04:16.783 "sha512" 00:04:16.783 ], 00:04:16.783 "dhchap_dhgroups": [ 00:04:16.783 "null", 00:04:16.783 "ffdhe2048", 00:04:16.783 "ffdhe3072", 00:04:16.783 "ffdhe4096", 00:04:16.783 "ffdhe6144", 00:04:16.783 "ffdhe8192" 00:04:16.783 ] 00:04:16.783 } 00:04:16.783 }, 00:04:16.783 { 00:04:16.783 
"method": "bdev_nvme_set_hotplug", 00:04:16.783 "params": { 00:04:16.783 "period_us": 100000, 00:04:16.783 "enable": false 00:04:16.783 } 00:04:16.783 }, 00:04:16.783 { 00:04:16.783 "method": "bdev_wait_for_examine" 00:04:16.783 } 00:04:16.783 ] 00:04:16.783 }, 00:04:16.783 { 00:04:16.783 "subsystem": "scsi", 00:04:16.783 "config": null 00:04:16.783 }, 00:04:16.783 { 00:04:16.783 "subsystem": "scheduler", 00:04:16.783 "config": [ 00:04:16.783 { 00:04:16.783 "method": "framework_set_scheduler", 00:04:16.783 "params": { 00:04:16.783 "name": "static" 00:04:16.783 } 00:04:16.783 } 00:04:16.783 ] 00:04:16.783 }, 00:04:16.783 { 00:04:16.783 "subsystem": "vhost_scsi", 00:04:16.783 "config": [] 00:04:16.783 }, 00:04:16.783 { 00:04:16.783 "subsystem": "vhost_blk", 00:04:16.783 "config": [] 00:04:16.783 }, 00:04:16.783 { 00:04:16.783 "subsystem": "ublk", 00:04:16.783 "config": [] 00:04:16.783 }, 00:04:16.783 { 00:04:16.783 "subsystem": "nbd", 00:04:16.783 "config": [] 00:04:16.783 }, 00:04:16.783 { 00:04:16.783 "subsystem": "nvmf", 00:04:16.783 "config": [ 00:04:16.783 { 00:04:16.783 "method": "nvmf_set_config", 00:04:16.783 "params": { 00:04:16.783 "discovery_filter": "match_any", 00:04:16.783 "admin_cmd_passthru": { 00:04:16.783 "identify_ctrlr": false 00:04:16.783 } 00:04:16.783 } 00:04:16.783 }, 00:04:16.783 { 00:04:16.783 "method": "nvmf_set_max_subsystems", 00:04:16.783 "params": { 00:04:16.783 "max_subsystems": 1024 00:04:16.783 } 00:04:16.783 }, 00:04:16.783 { 00:04:16.783 "method": "nvmf_set_crdt", 00:04:16.783 "params": { 00:04:16.783 "crdt1": 0, 00:04:16.783 "crdt2": 0, 00:04:16.783 "crdt3": 0 00:04:16.783 } 00:04:16.783 }, 00:04:16.783 { 00:04:16.783 "method": "nvmf_create_transport", 00:04:16.783 "params": { 00:04:16.783 "trtype": "TCP", 00:04:16.783 "max_queue_depth": 128, 00:04:16.783 "max_io_qpairs_per_ctrlr": 127, 00:04:16.783 "in_capsule_data_size": 4096, 00:04:16.783 "max_io_size": 131072, 00:04:16.783 "io_unit_size": 131072, 00:04:16.783 "max_aq_depth": 
128, 00:04:16.783 "num_shared_buffers": 511, 00:04:16.783 "buf_cache_size": 4294967295, 00:04:16.783 "dif_insert_or_strip": false, 00:04:16.783 "zcopy": false, 00:04:16.783 "c2h_success": true, 00:04:16.783 "sock_priority": 0, 00:04:16.783 "abort_timeout_sec": 1, 00:04:16.783 "ack_timeout": 0, 00:04:16.783 "data_wr_pool_size": 0 00:04:16.783 } 00:04:16.784 } 00:04:16.784 ] 00:04:16.784 }, 00:04:16.784 { 00:04:16.784 "subsystem": "iscsi", 00:04:16.784 "config": [ 00:04:16.784 { 00:04:16.784 "method": "iscsi_set_options", 00:04:16.784 "params": { 00:04:16.784 "node_base": "iqn.2016-06.io.spdk", 00:04:16.784 "max_sessions": 128, 00:04:16.784 "max_connections_per_session": 2, 00:04:16.784 "max_queue_depth": 64, 00:04:16.784 "default_time2wait": 2, 00:04:16.784 "default_time2retain": 20, 00:04:16.784 "first_burst_length": 8192, 00:04:16.784 "immediate_data": true, 00:04:16.784 "allow_duplicated_isid": false, 00:04:16.784 "error_recovery_level": 0, 00:04:16.784 "nop_timeout": 60, 00:04:16.784 "nop_in_interval": 30, 00:04:16.784 "disable_chap": false, 00:04:16.784 "require_chap": false, 00:04:16.784 "mutual_chap": false, 00:04:16.784 "chap_group": 0, 00:04:16.784 "max_large_datain_per_connection": 64, 00:04:16.784 "max_r2t_per_connection": 4, 00:04:16.784 "pdu_pool_size": 36864, 00:04:16.784 "immediate_data_pool_size": 16384, 00:04:16.784 "data_out_pool_size": 2048 00:04:16.784 } 00:04:16.784 } 00:04:16.784 ] 00:04:16.784 } 00:04:16.784 ] 00:04:16.784 } 00:04:16.784 19:42:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:16.784 19:42:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59142 00:04:16.784 19:42:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 59142 ']' 00:04:16.784 19:42:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 59142 00:04:16.784 19:42:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:16.784 19:42:45 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:16.784 19:42:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59142 00:04:16.784 19:42:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:16.784 killing process with pid 59142 00:04:16.784 19:42:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:16.784 19:42:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59142' 00:04:16.784 19:42:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 59142 00:04:16.784 19:42:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 59142 00:04:17.351 19:42:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59164 00:04:17.351 19:42:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:17.351 19:42:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:22.634 19:42:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59164 00:04:22.634 19:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 59164 ']' 00:04:22.634 19:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 59164 00:04:22.634 19:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:22.634 19:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:22.634 19:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59164 00:04:22.634 19:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:22.634 killing process with pid 59164 00:04:22.634 19:42:50 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:22.634 19:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59164' 00:04:22.634 19:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 59164 00:04:22.634 19:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 59164 00:04:22.634 19:42:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:22.634 19:42:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:22.634 00:04:22.634 real 0m6.975s 00:04:22.634 user 0m6.713s 00:04:22.634 sys 0m0.619s 00:04:22.634 19:42:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:22.634 ************************************ 00:04:22.634 END TEST skip_rpc_with_json 00:04:22.634 19:42:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:22.634 ************************************ 00:04:22.634 19:42:51 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:22.634 19:42:51 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.634 19:42:51 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.634 19:42:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.634 ************************************ 00:04:22.634 START TEST skip_rpc_with_delay 00:04:22.634 ************************************ 00:04:22.634 19:42:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:22.634 19:42:51 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:22.634 19:42:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # 
local es=0 00:04:22.634 19:42:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:22.634 19:42:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:22.634 19:42:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:22.634 19:42:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:22.634 19:42:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:22.634 19:42:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:22.634 19:42:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:22.634 19:42:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:22.634 19:42:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:22.634 19:42:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:22.634 [2024-07-24 19:42:51.243364] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:22.634 [2024-07-24 19:42:51.243560] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:22.634 19:42:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:22.634 19:42:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:22.634 ************************************ 00:04:22.634 END TEST skip_rpc_with_delay 00:04:22.634 19:42:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:22.634 19:42:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:22.634 00:04:22.634 real 0m0.108s 00:04:22.634 user 0m0.062s 00:04:22.634 sys 0m0.043s 00:04:22.634 19:42:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:22.634 19:42:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:22.634 ************************************ 00:04:22.893 19:42:51 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:22.893 19:42:51 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:22.893 19:42:51 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:22.893 19:42:51 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.893 19:42:51 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.893 19:42:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.893 ************************************ 00:04:22.893 START TEST exit_on_failed_rpc_init 00:04:22.893 ************************************ 00:04:22.893 19:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:22.893 19:42:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59279 00:04:22.893 19:42:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 
00:04:22.893 19:42:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59279 00:04:22.893 19:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 59279 ']' 00:04:22.893 19:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:22.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:22.893 19:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:22.893 19:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:22.893 19:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:22.893 19:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:22.893 [2024-07-24 19:42:51.398138] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:04:22.893 [2024-07-24 19:42:51.398276] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59279 ] 00:04:22.893 [2024-07-24 19:42:51.540584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.151 [2024-07-24 19:42:51.664882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.151 [2024-07-24 19:42:51.709490] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:23.719 19:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:23.719 19:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:23.719 19:42:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:23.719 19:42:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:23.719 19:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:23.719 19:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:23.719 19:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:23.719 19:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:23.719 19:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:23.719 19:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:23.719 19:42:52 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:23.719 19:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:23.719 19:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:23.719 19:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:23.719 19:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:23.977 [2024-07-24 19:42:52.453993] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:04:23.977 [2024-07-24 19:42:52.454146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59297 ] 00:04:23.977 [2024-07-24 19:42:52.603356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.236 [2024-07-24 19:42:52.780757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:24.236 [2024-07-24 19:42:52.780910] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:24.236 [2024-07-24 19:42:52.780930] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:24.236 [2024-07-24 19:42:52.780943] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:24.495 19:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:24.495 19:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:24.495 19:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:24.495 19:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:24.495 19:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:24.495 19:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:24.495 19:42:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:24.495 19:42:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59279 00:04:24.495 19:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 59279 ']' 00:04:24.495 19:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 59279 00:04:24.495 19:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:24.495 19:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:24.495 19:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59279 00:04:24.495 19:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:24.495 killing process with pid 59279 00:04:24.495 19:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:24.495 19:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 59279' 00:04:24.495 19:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 59279 00:04:24.495 19:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 59279 00:04:24.754 00:04:24.754 real 0m1.972s 00:04:24.754 user 0m2.423s 00:04:24.754 sys 0m0.437s 00:04:24.754 19:42:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:24.754 19:42:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:24.754 ************************************ 00:04:24.754 END TEST exit_on_failed_rpc_init 00:04:24.754 ************************************ 00:04:24.754 19:42:53 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:24.754 00:04:24.754 real 0m14.988s 00:04:24.754 user 0m14.588s 00:04:24.754 sys 0m1.535s 00:04:24.754 19:42:53 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:24.754 19:42:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.754 ************************************ 00:04:24.754 END TEST skip_rpc 00:04:24.754 ************************************ 00:04:24.754 19:42:53 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:24.754 19:42:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:24.754 19:42:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:24.754 19:42:53 -- common/autotest_common.sh@10 -- # set +x 00:04:24.754 ************************************ 00:04:24.754 START TEST rpc_client 00:04:24.754 ************************************ 00:04:24.754 19:42:53 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:25.015 * Looking for test storage... 
00:04:25.015 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:25.015 19:42:53 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:25.015 OK 00:04:25.015 19:42:53 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:25.015 00:04:25.015 real 0m0.098s 00:04:25.015 user 0m0.049s 00:04:25.015 sys 0m0.055s 00:04:25.015 19:42:53 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:25.015 19:42:53 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:25.015 ************************************ 00:04:25.015 END TEST rpc_client 00:04:25.015 ************************************ 00:04:25.015 19:42:53 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:25.015 19:42:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:25.015 19:42:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:25.015 19:42:53 -- common/autotest_common.sh@10 -- # set +x 00:04:25.015 ************************************ 00:04:25.015 START TEST json_config 00:04:25.015 ************************************ 00:04:25.015 19:42:53 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:25.015 19:42:53 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:25.015 19:42:53 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:25.015 19:42:53 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:25.015 19:42:53 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:25.015 19:42:53 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:25.015 19:42:53 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:25.015 19:42:53 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:25.015 19:42:53 json_config -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:04:25.015 19:42:53 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:25.015 19:42:53 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:25.015 19:42:53 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:25.015 19:42:53 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:25.015 19:42:53 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0707769d-9dae-4359-8edf-9efcc4e972e8 00:04:25.015 19:42:53 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=0707769d-9dae-4359-8edf-9efcc4e972e8 00:04:25.015 19:42:53 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:25.015 19:42:53 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:25.015 19:42:53 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:25.015 19:42:53 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:25.015 19:42:53 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:25.015 19:42:53 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:25.015 19:42:53 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:25.015 19:42:53 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:25.015 19:42:53 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.015 19:42:53 json_config -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.015 19:42:53 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.015 19:42:53 json_config -- paths/export.sh@5 -- # export PATH 00:04:25.015 19:42:53 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.015 19:42:53 json_config -- nvmf/common.sh@47 -- # : 0 00:04:25.015 19:42:53 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:25.015 19:42:53 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:25.015 19:42:53 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:25.015 19:42:53 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:25.015 19:42:53 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:25.015 19:42:53 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:25.015 19:42:53 
json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:25.015 19:42:53 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:25.015 19:42:53 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:25.015 19:42:53 json_config -- json_config/json_config.sh@11 -- # [[ 1 -eq 1 ]] 00:04:25.015 19:42:53 json_config -- json_config/json_config.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:04:25.015 19:42:53 json_config -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:04:25.015 19:42:53 json_config -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:04:25.015 19:42:53 json_config -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:04:25.015 19:42:53 json_config -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:04:25.015 19:42:53 json_config -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:04:25.015 19:42:53 json_config -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:04:25.015 19:42:53 json_config -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:04:25.015 19:42:53 json_config -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:04:25.015 19:42:53 json_config -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:04:25.015 19:42:53 json_config -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:04:25.015 19:42:53 json_config -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:04:25.015 19:42:53 json_config -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:04:25.015 19:42:53 json_config -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:04:25.015 19:42:53 json_config -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:04:25.015 19:42:53 json_config -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:04:25.015 19:42:53 json_config -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:04:25.015 19:42:53 json_config -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 
00:04:25.015 19:42:53 json_config -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:04:25.015 19:42:53 json_config -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:04:25.015 19:42:53 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:25.015 19:42:53 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:25.015 19:42:53 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:25.015 19:42:53 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:25.015 19:42:53 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:25.016 19:42:53 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:25.016 19:42:53 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:25.016 19:42:53 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:25.016 19:42:53 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:25.016 19:42:53 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:25.016 19:42:53 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:25.016 19:42:53 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:25.016 19:42:53 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:25.016 INFO: JSON configuration test init 00:04:25.016 19:42:53 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:04:25.016 19:42:53 
json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:04:25.016 19:42:53 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:04:25.016 19:42:53 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:25.016 19:42:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.016 19:42:53 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:04:25.016 19:42:53 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:25.016 19:42:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.016 19:42:53 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:04:25.016 19:42:53 json_config -- json_config/common.sh@9 -- # local app=target 00:04:25.016 19:42:53 json_config -- json_config/common.sh@10 -- # shift 00:04:25.016 19:42:53 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:25.016 19:42:53 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:25.016 19:42:53 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:25.016 19:42:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:25.016 19:42:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:25.016 19:42:53 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59415 00:04:25.016 Waiting for target to run... 00:04:25.016 19:42:53 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:04:25.016 19:42:53 json_config -- json_config/common.sh@25 -- # waitforlisten 59415 /var/tmp/spdk_tgt.sock 00:04:25.016 19:42:53 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:25.016 19:42:53 json_config -- common/autotest_common.sh@831 -- # '[' -z 59415 ']' 00:04:25.016 19:42:53 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:25.016 19:42:53 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:25.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:25.016 19:42:53 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:25.016 19:42:53 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:25.016 19:42:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.274 [2024-07-24 19:42:53.681107] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:04:25.274 [2024-07-24 19:42:53.681202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59415 ] 00:04:25.532 [2024-07-24 19:42:54.036509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.532 [2024-07-24 19:42:54.137984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.098 19:42:54 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:26.098 19:42:54 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:26.098 00:04:26.098 19:42:54 json_config -- json_config/common.sh@26 -- # echo '' 00:04:26.098 19:42:54 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:04:26.098 19:42:54 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:04:26.098 19:42:54 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:26.098 19:42:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.098 19:42:54 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:04:26.098 19:42:54 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:04:26.098 19:42:54 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:26.098 19:42:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.098 19:42:54 json_config -- json_config/json_config.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:26.098 19:42:54 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:04:26.098 19:42:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:26.355 [2024-07-24 19:42:54.962285] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket 
implementaion override: uring 00:04:26.613 19:42:55 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:04:26.613 19:42:55 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:26.613 19:42:55 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:26.613 19:42:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.613 19:42:55 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:26.613 19:42:55 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:26.613 19:42:55 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:26.613 19:42:55 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:26.613 19:42:55 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:26.613 19:42:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:26.872 19:42:55 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:26.872 19:42:55 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:26.872 19:42:55 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:04:26.872 19:42:55 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:04:26.872 19:42:55 json_config -- json_config/json_config.sh@51 -- # sort 00:04:26.872 19:42:55 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:04:26.872 19:42:55 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:04:26.872 19:42:55 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:04:26.872 19:42:55 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:04:26.872 19:42:55 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 
00:04:26.872 19:42:55 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:26.872 19:42:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.872 19:42:55 json_config -- json_config/json_config.sh@59 -- # return 0 00:04:26.872 19:42:55 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:26.872 19:42:55 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:26.872 19:42:55 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:26.872 19:42:55 json_config -- json_config/json_config.sh@291 -- # create_iscsi_subsystem_config 00:04:26.872 19:42:55 json_config -- json_config/json_config.sh@225 -- # timing_enter create_iscsi_subsystem_config 00:04:26.872 19:42:55 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:26.872 19:42:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.872 19:42:55 json_config -- json_config/json_config.sh@226 -- # tgt_rpc bdev_malloc_create 64 1024 --name MallocForIscsi0 00:04:26.872 19:42:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 64 1024 --name MallocForIscsi0 00:04:27.131 MallocForIscsi0 00:04:27.131 19:42:55 json_config -- json_config/json_config.sh@227 -- # tgt_rpc iscsi_create_portal_group 1 127.0.0.1:3260 00:04:27.131 19:42:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock iscsi_create_portal_group 1 127.0.0.1:3260 00:04:27.697 19:42:56 json_config -- json_config/json_config.sh@228 -- # tgt_rpc iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:04:27.697 19:42:56 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:04:27.697 19:42:56 json_config -- json_config/json_config.sh@229 -- # tgt_rpc iscsi_create_target_node Target3 Target3_alias MallocForIscsi0:0 
1:2 64 -d 00:04:27.697 19:42:56 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock iscsi_create_target_node Target3 Target3_alias MallocForIscsi0:0 1:2 64 -d 00:04:27.954 19:42:56 json_config -- json_config/json_config.sh@230 -- # timing_exit create_iscsi_subsystem_config 00:04:27.954 19:42:56 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:27.954 19:42:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.212 19:42:56 json_config -- json_config/json_config.sh@294 -- # [[ 0 -eq 1 ]] 00:04:28.212 19:42:56 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:04:28.212 19:42:56 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:28.212 19:42:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.212 19:42:56 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:04:28.212 19:42:56 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:28.212 19:42:56 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:28.470 MallocBdevForConfigChangeCheck 00:04:28.470 19:42:56 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:04:28.470 19:42:56 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:28.470 19:42:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.470 19:42:56 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:04:28.470 19:42:57 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:29.035 INFO: shutting down applications... 
00:04:29.035 19:42:57 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:04:29.035 19:42:57 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:04:29.035 19:42:57 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:04:29.035 19:42:57 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:04:29.035 19:42:57 json_config -- json_config/json_config.sh@337 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:29.293 Calling clear_iscsi_subsystem 00:04:29.293 Calling clear_nvmf_subsystem 00:04:29.293 Calling clear_nbd_subsystem 00:04:29.293 Calling clear_ublk_subsystem 00:04:29.293 Calling clear_vhost_blk_subsystem 00:04:29.293 Calling clear_vhost_scsi_subsystem 00:04:29.293 Calling clear_bdev_subsystem 00:04:29.293 19:42:57 json_config -- json_config/json_config.sh@341 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:29.293 19:42:57 json_config -- json_config/json_config.sh@347 -- # count=100 00:04:29.293 19:42:57 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:04:29.293 19:42:57 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:29.293 19:42:57 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:29.293 19:42:57 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:29.551 19:42:58 json_config -- json_config/json_config.sh@349 -- # break 00:04:29.551 19:42:58 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:04:29.551 19:42:58 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:04:29.551 19:42:58 json_config -- 
json_config/common.sh@31 -- # local app=target 00:04:29.551 19:42:58 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:29.551 19:42:58 json_config -- json_config/common.sh@35 -- # [[ -n 59415 ]] 00:04:29.551 19:42:58 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59415 00:04:29.551 19:42:58 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:29.551 19:42:58 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:29.551 19:42:58 json_config -- json_config/common.sh@41 -- # kill -0 59415 00:04:29.551 19:42:58 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:30.117 19:42:58 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:30.117 19:42:58 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:30.117 19:42:58 json_config -- json_config/common.sh@41 -- # kill -0 59415 00:04:30.117 19:42:58 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:30.117 19:42:58 json_config -- json_config/common.sh@43 -- # break 00:04:30.117 19:42:58 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:30.117 SPDK target shutdown done 00:04:30.117 19:42:58 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:30.117 INFO: relaunching applications... 00:04:30.117 19:42:58 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 
00:04:30.117 19:42:58 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:30.117 19:42:58 json_config -- json_config/common.sh@9 -- # local app=target 00:04:30.117 19:42:58 json_config -- json_config/common.sh@10 -- # shift 00:04:30.117 19:42:58 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:30.117 19:42:58 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:30.117 19:42:58 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:30.117 19:42:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.117 19:42:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.117 19:42:58 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59602 00:04:30.117 19:42:58 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:30.117 Waiting for target to run... 00:04:30.117 19:42:58 json_config -- json_config/common.sh@25 -- # waitforlisten 59602 /var/tmp/spdk_tgt.sock 00:04:30.117 19:42:58 json_config -- common/autotest_common.sh@831 -- # '[' -z 59602 ']' 00:04:30.117 19:42:58 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:30.117 19:42:58 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:30.117 19:42:58 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:30.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:30.117 19:42:58 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:04:30.117 19:42:58 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:30.117 19:42:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.117 [2024-07-24 19:42:58.703166] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:04:30.117 [2024-07-24 19:42:58.703246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59602 ] 00:04:30.409 [2024-07-24 19:42:59.067306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.670 [2024-07-24 19:42:59.148632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.670 [2024-07-24 19:42:59.274480] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:30.929 19:42:59 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:30.929 19:42:59 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:30.929 19:42:59 json_config -- json_config/common.sh@26 -- # echo '' 00:04:30.929 00:04:30.929 19:42:59 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:04:30.929 INFO: Checking if target configuration is the same... 00:04:30.929 19:42:59 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 
00:04:30.930 19:42:59 json_config -- json_config/json_config.sh@382 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:30.930 19:42:59 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:04:30.930 19:42:59 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:30.930 + '[' 2 -ne 2 ']' 00:04:30.930 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:30.930 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:30.930 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:30.930 +++ basename /dev/fd/62 00:04:30.930 ++ mktemp /tmp/62.XXX 00:04:30.930 + tmp_file_1=/tmp/62.vrl 00:04:30.930 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:30.930 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:30.930 + tmp_file_2=/tmp/spdk_tgt_config.json.wWn 00:04:30.930 + ret=0 00:04:30.930 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:31.498 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:31.498 + diff -u /tmp/62.vrl /tmp/spdk_tgt_config.json.wWn 00:04:31.498 + echo 'INFO: JSON config files are the same' 00:04:31.498 INFO: JSON config files are the same 00:04:31.498 + rm /tmp/62.vrl /tmp/spdk_tgt_config.json.wWn 00:04:31.498 + exit 0 00:04:31.498 19:43:00 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:04:31.498 INFO: changing configuration and checking if this can be detected... 00:04:31.498 19:43:00 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
00:04:31.498 19:43:00 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:31.498 19:43:00 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:31.755 19:43:00 json_config -- json_config/json_config.sh@391 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:31.755 19:43:00 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:04:31.755 19:43:00 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:31.755 + '[' 2 -ne 2 ']' 00:04:31.755 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:31.755 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:31.755 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:31.755 +++ basename /dev/fd/62 00:04:31.755 ++ mktemp /tmp/62.XXX 00:04:31.755 + tmp_file_1=/tmp/62.kKb 00:04:31.755 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:31.755 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:31.755 + tmp_file_2=/tmp/spdk_tgt_config.json.hOo 00:04:31.755 + ret=0 00:04:31.755 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:32.324 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:32.324 + diff -u /tmp/62.kKb /tmp/spdk_tgt_config.json.hOo 00:04:32.324 + ret=1 00:04:32.324 + echo '=== Start of file: /tmp/62.kKb ===' 00:04:32.324 + cat /tmp/62.kKb 00:04:32.324 + echo '=== End of file: /tmp/62.kKb ===' 00:04:32.324 + echo '' 00:04:32.324 + echo '=== Start of file: /tmp/spdk_tgt_config.json.hOo ===' 00:04:32.324 + cat /tmp/spdk_tgt_config.json.hOo 00:04:32.324 + echo '=== End of file: /tmp/spdk_tgt_config.json.hOo ===' 00:04:32.324 + echo '' 00:04:32.324 + rm /tmp/62.kKb 
/tmp/spdk_tgt_config.json.hOo 00:04:32.324 + exit 1 00:04:32.324 INFO: configuration change detected. 00:04:32.324 19:43:00 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:04:32.324 19:43:00 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:04:32.324 19:43:00 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:04:32.324 19:43:00 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:32.324 19:43:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.324 19:43:00 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:04:32.324 19:43:00 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:04:32.324 19:43:00 json_config -- json_config/json_config.sh@321 -- # [[ -n 59602 ]] 00:04:32.324 19:43:00 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:04:32.324 19:43:00 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:04:32.324 19:43:00 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:32.324 19:43:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.324 19:43:00 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:04:32.324 19:43:00 json_config -- json_config/json_config.sh@197 -- # uname -s 00:04:32.324 19:43:00 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:04:32.324 19:43:00 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:04:32.324 19:43:00 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:04:32.324 19:43:00 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:04:32.324 19:43:00 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:32.324 19:43:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.324 19:43:00 json_config -- 
json_config/json_config.sh@327 -- # killprocess 59602 00:04:32.324 19:43:00 json_config -- common/autotest_common.sh@950 -- # '[' -z 59602 ']' 00:04:32.324 19:43:00 json_config -- common/autotest_common.sh@954 -- # kill -0 59602 00:04:32.324 19:43:00 json_config -- common/autotest_common.sh@955 -- # uname 00:04:32.324 19:43:00 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:32.324 19:43:00 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59602 00:04:32.324 killing process with pid 59602 00:04:32.324 19:43:00 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:32.324 19:43:00 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:32.324 19:43:00 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59602' 00:04:32.324 19:43:00 json_config -- common/autotest_common.sh@969 -- # kill 59602 00:04:32.324 19:43:00 json_config -- common/autotest_common.sh@974 -- # wait 59602 00:04:32.583 19:43:01 json_config -- json_config/json_config.sh@330 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:32.583 19:43:01 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:04:32.583 19:43:01 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:32.583 19:43:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.842 INFO: Success 00:04:32.842 19:43:01 json_config -- json_config/json_config.sh@332 -- # return 0 00:04:32.842 19:43:01 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:04:32.842 00:04:32.842 real 0m7.719s 00:04:32.842 user 0m10.682s 00:04:32.842 sys 0m1.740s 00:04:32.842 ************************************ 00:04:32.842 END TEST json_config 00:04:32.842 ************************************ 00:04:32.842 19:43:01 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:04:32.842 19:43:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.842 19:43:01 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:32.842 19:43:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:32.842 19:43:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:32.842 19:43:01 -- common/autotest_common.sh@10 -- # set +x 00:04:32.842 ************************************ 00:04:32.842 START TEST json_config_extra_key 00:04:32.842 ************************************ 00:04:32.842 19:43:01 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:32.842 19:43:01 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:32.842 19:43:01 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:32.842 19:43:01 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:32.842 19:43:01 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:32.842 19:43:01 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:32.842 19:43:01 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:32.842 19:43:01 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:32.842 19:43:01 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:32.842 19:43:01 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:32.842 19:43:01 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:32.842 19:43:01 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:32.842 19:43:01 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:32.842 19:43:01 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0707769d-9dae-4359-8edf-9efcc4e972e8 00:04:32.842 19:43:01 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=0707769d-9dae-4359-8edf-9efcc4e972e8 00:04:32.842 19:43:01 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:32.842 19:43:01 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:32.842 19:43:01 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:32.842 19:43:01 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:32.842 19:43:01 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:32.842 19:43:01 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:32.842 19:43:01 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:32.842 19:43:01 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:32.842 19:43:01 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.842 19:43:01 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.842 19:43:01 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.842 19:43:01 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:32.842 19:43:01 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.842 19:43:01 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:32.843 19:43:01 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:32.843 19:43:01 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:32.843 19:43:01 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:32.843 19:43:01 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:32.843 19:43:01 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:04:32.843 19:43:01 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:32.843 19:43:01 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:32.843 19:43:01 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:32.843 19:43:01 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:32.843 19:43:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:32.843 19:43:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:32.843 19:43:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:32.843 19:43:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:32.843 19:43:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:32.843 19:43:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:32.843 19:43:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:32.843 19:43:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:32.843 19:43:01 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:32.843 19:43:01 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:32.843 INFO: launching applications... 
00:04:32.843 19:43:01 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:32.843 19:43:01 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:32.843 19:43:01 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:32.843 19:43:01 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:32.843 19:43:01 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:32.843 19:43:01 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:32.843 19:43:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:32.843 19:43:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:32.843 19:43:01 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59748 00:04:32.843 Waiting for target to run... 00:04:32.843 19:43:01 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:32.843 19:43:01 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59748 /var/tmp/spdk_tgt.sock 00:04:32.843 19:43:01 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 59748 ']' 00:04:32.843 19:43:01 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:32.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:32.843 19:43:01 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:32.843 19:43:01 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:04:32.843 19:43:01 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:32.843 19:43:01 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:32.843 19:43:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:32.843 [2024-07-24 19:43:01.478667] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:04:32.843 [2024-07-24 19:43:01.478764] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59748 ] 00:04:33.409 [2024-07-24 19:43:01.865166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.409 [2024-07-24 19:43:01.947776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.409 [2024-07-24 19:43:01.968683] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:33.973 19:43:02 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:33.973 19:43:02 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:33.973 00:04:33.973 INFO: shutting down applications... 00:04:33.973 19:43:02 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:33.973 19:43:02 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:04:33.973 19:43:02 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:33.973 19:43:02 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:33.973 19:43:02 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:33.973 19:43:02 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59748 ]] 00:04:33.973 19:43:02 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59748 00:04:33.973 19:43:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:33.973 19:43:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:33.973 19:43:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59748 00:04:33.973 19:43:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:34.537 19:43:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:34.537 19:43:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:34.537 19:43:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59748 00:04:34.537 19:43:02 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:34.537 19:43:02 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:34.537 19:43:02 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:34.537 SPDK target shutdown done 00:04:34.537 19:43:02 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:34.537 Success 00:04:34.537 19:43:02 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:34.537 00:04:34.537 real 0m1.685s 00:04:34.537 user 0m1.571s 00:04:34.537 sys 0m0.423s 00:04:34.537 19:43:02 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:34.537 19:43:03 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:34.537 ************************************ 
00:04:34.537 END TEST json_config_extra_key 00:04:34.537 ************************************ 00:04:34.537 19:43:03 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:34.537 19:43:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:34.537 19:43:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:34.537 19:43:03 -- common/autotest_common.sh@10 -- # set +x 00:04:34.537 ************************************ 00:04:34.537 START TEST alias_rpc 00:04:34.537 ************************************ 00:04:34.537 19:43:03 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:34.537 * Looking for test storage... 00:04:34.537 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:34.537 19:43:03 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:34.538 19:43:03 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59813 00:04:34.538 19:43:03 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59813 00:04:34.538 19:43:03 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:34.538 19:43:03 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 59813 ']' 00:04:34.538 19:43:03 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.538 19:43:03 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:34.538 19:43:03 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:34.538 19:43:03 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:34.538 19:43:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.804 [2024-07-24 19:43:03.209909] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:04:34.804 [2024-07-24 19:43:03.210021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59813 ] 00:04:34.804 [2024-07-24 19:43:03.354499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.060 [2024-07-24 19:43:03.482968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.060 [2024-07-24 19:43:03.533494] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:35.626 19:43:04 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:35.626 19:43:04 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:35.626 19:43:04 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:35.884 19:43:04 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59813 00:04:35.884 19:43:04 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 59813 ']' 00:04:35.884 19:43:04 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 59813 00:04:35.884 19:43:04 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:35.884 19:43:04 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:35.884 19:43:04 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59813 00:04:35.884 killing process with pid 59813 00:04:35.884 19:43:04 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:35.884 19:43:04 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:35.884 19:43:04 
alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59813' 00:04:35.884 19:43:04 alias_rpc -- common/autotest_common.sh@969 -- # kill 59813 00:04:35.884 19:43:04 alias_rpc -- common/autotest_common.sh@974 -- # wait 59813 00:04:36.142 ************************************ 00:04:36.142 END TEST alias_rpc 00:04:36.142 ************************************ 00:04:36.142 00:04:36.142 real 0m1.677s 00:04:36.142 user 0m1.862s 00:04:36.142 sys 0m0.402s 00:04:36.142 19:43:04 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:36.142 19:43:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.142 19:43:04 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:36.142 19:43:04 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:36.142 19:43:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.142 19:43:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.142 19:43:04 -- common/autotest_common.sh@10 -- # set +x 00:04:36.142 ************************************ 00:04:36.142 START TEST spdkcli_tcp 00:04:36.142 ************************************ 00:04:36.142 19:43:04 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:36.399 * Looking for test storage... 
00:04:36.399 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:36.399 19:43:04 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:36.399 19:43:04 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:36.399 19:43:04 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:36.399 19:43:04 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:36.399 19:43:04 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:36.399 19:43:04 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:36.399 19:43:04 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:36.399 19:43:04 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:36.399 19:43:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:36.399 19:43:04 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59883 00:04:36.399 19:43:04 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:36.399 19:43:04 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59883 00:04:36.399 19:43:04 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 59883 ']' 00:04:36.399 19:43:04 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.399 19:43:04 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:36.399 19:43:04 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:36.399 19:43:04 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:36.399 19:43:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:36.399 [2024-07-24 19:43:04.953288] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:04:36.399 [2024-07-24 19:43:04.953411] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59883 ] 00:04:36.673 [2024-07-24 19:43:05.094483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:36.673 [2024-07-24 19:43:05.217769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:36.673 [2024-07-24 19:43:05.217779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.673 [2024-07-24 19:43:05.262544] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:37.606 19:43:05 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:37.606 19:43:05 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:04:37.606 19:43:05 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59900 00:04:37.606 19:43:05 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:37.606 19:43:05 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:37.606 [ 00:04:37.606 "bdev_malloc_delete", 00:04:37.606 "bdev_malloc_create", 00:04:37.606 "bdev_null_resize", 00:04:37.606 "bdev_null_delete", 00:04:37.606 "bdev_null_create", 00:04:37.606 "bdev_nvme_cuse_unregister", 00:04:37.606 "bdev_nvme_cuse_register", 00:04:37.606 "bdev_opal_new_user", 00:04:37.606 "bdev_opal_set_lock_state", 00:04:37.606 "bdev_opal_delete", 00:04:37.606 "bdev_opal_get_info", 00:04:37.606 "bdev_opal_create", 
00:04:37.606 "bdev_nvme_opal_revert", 00:04:37.606 "bdev_nvme_opal_init", 00:04:37.606 "bdev_nvme_send_cmd", 00:04:37.606 "bdev_nvme_get_path_iostat", 00:04:37.606 "bdev_nvme_get_mdns_discovery_info", 00:04:37.606 "bdev_nvme_stop_mdns_discovery", 00:04:37.606 "bdev_nvme_start_mdns_discovery", 00:04:37.606 "bdev_nvme_set_multipath_policy", 00:04:37.606 "bdev_nvme_set_preferred_path", 00:04:37.606 "bdev_nvme_get_io_paths", 00:04:37.606 "bdev_nvme_remove_error_injection", 00:04:37.606 "bdev_nvme_add_error_injection", 00:04:37.606 "bdev_nvme_get_discovery_info", 00:04:37.606 "bdev_nvme_stop_discovery", 00:04:37.606 "bdev_nvme_start_discovery", 00:04:37.606 "bdev_nvme_get_controller_health_info", 00:04:37.606 "bdev_nvme_disable_controller", 00:04:37.606 "bdev_nvme_enable_controller", 00:04:37.606 "bdev_nvme_reset_controller", 00:04:37.606 "bdev_nvme_get_transport_statistics", 00:04:37.606 "bdev_nvme_apply_firmware", 00:04:37.606 "bdev_nvme_detach_controller", 00:04:37.606 "bdev_nvme_get_controllers", 00:04:37.606 "bdev_nvme_attach_controller", 00:04:37.606 "bdev_nvme_set_hotplug", 00:04:37.606 "bdev_nvme_set_options", 00:04:37.606 "bdev_passthru_delete", 00:04:37.606 "bdev_passthru_create", 00:04:37.606 "bdev_lvol_set_parent_bdev", 00:04:37.606 "bdev_lvol_set_parent", 00:04:37.606 "bdev_lvol_check_shallow_copy", 00:04:37.606 "bdev_lvol_start_shallow_copy", 00:04:37.606 "bdev_lvol_grow_lvstore", 00:04:37.606 "bdev_lvol_get_lvols", 00:04:37.606 "bdev_lvol_get_lvstores", 00:04:37.606 "bdev_lvol_delete", 00:04:37.606 "bdev_lvol_set_read_only", 00:04:37.606 "bdev_lvol_resize", 00:04:37.606 "bdev_lvol_decouple_parent", 00:04:37.606 "bdev_lvol_inflate", 00:04:37.606 "bdev_lvol_rename", 00:04:37.606 "bdev_lvol_clone_bdev", 00:04:37.606 "bdev_lvol_clone", 00:04:37.606 "bdev_lvol_snapshot", 00:04:37.606 "bdev_lvol_create", 00:04:37.606 "bdev_lvol_delete_lvstore", 00:04:37.606 "bdev_lvol_rename_lvstore", 00:04:37.606 "bdev_lvol_create_lvstore", 00:04:37.606 
"bdev_raid_set_options", 00:04:37.606 "bdev_raid_remove_base_bdev", 00:04:37.606 "bdev_raid_add_base_bdev", 00:04:37.607 "bdev_raid_delete", 00:04:37.607 "bdev_raid_create", 00:04:37.607 "bdev_raid_get_bdevs", 00:04:37.607 "bdev_error_inject_error", 00:04:37.607 "bdev_error_delete", 00:04:37.607 "bdev_error_create", 00:04:37.607 "bdev_split_delete", 00:04:37.607 "bdev_split_create", 00:04:37.607 "bdev_delay_delete", 00:04:37.607 "bdev_delay_create", 00:04:37.607 "bdev_delay_update_latency", 00:04:37.607 "bdev_zone_block_delete", 00:04:37.607 "bdev_zone_block_create", 00:04:37.607 "blobfs_create", 00:04:37.607 "blobfs_detect", 00:04:37.607 "blobfs_set_cache_size", 00:04:37.607 "bdev_aio_delete", 00:04:37.607 "bdev_aio_rescan", 00:04:37.607 "bdev_aio_create", 00:04:37.607 "bdev_ftl_set_property", 00:04:37.607 "bdev_ftl_get_properties", 00:04:37.607 "bdev_ftl_get_stats", 00:04:37.607 "bdev_ftl_unmap", 00:04:37.607 "bdev_ftl_unload", 00:04:37.607 "bdev_ftl_delete", 00:04:37.607 "bdev_ftl_load", 00:04:37.607 "bdev_ftl_create", 00:04:37.607 "bdev_virtio_attach_controller", 00:04:37.607 "bdev_virtio_scsi_get_devices", 00:04:37.607 "bdev_virtio_detach_controller", 00:04:37.607 "bdev_virtio_blk_set_hotplug", 00:04:37.607 "bdev_iscsi_delete", 00:04:37.607 "bdev_iscsi_create", 00:04:37.607 "bdev_iscsi_set_options", 00:04:37.607 "bdev_uring_delete", 00:04:37.607 "bdev_uring_rescan", 00:04:37.607 "bdev_uring_create", 00:04:37.607 "accel_error_inject_error", 00:04:37.607 "ioat_scan_accel_module", 00:04:37.607 "dsa_scan_accel_module", 00:04:37.607 "iaa_scan_accel_module", 00:04:37.607 "keyring_file_remove_key", 00:04:37.607 "keyring_file_add_key", 00:04:37.607 "keyring_linux_set_options", 00:04:37.607 "iscsi_get_histogram", 00:04:37.607 "iscsi_enable_histogram", 00:04:37.607 "iscsi_set_options", 00:04:37.607 "iscsi_get_auth_groups", 00:04:37.607 "iscsi_auth_group_remove_secret", 00:04:37.607 "iscsi_auth_group_add_secret", 00:04:37.607 "iscsi_delete_auth_group", 00:04:37.607 
"iscsi_create_auth_group", 00:04:37.607 "iscsi_set_discovery_auth", 00:04:37.607 "iscsi_get_options", 00:04:37.607 "iscsi_target_node_request_logout", 00:04:37.607 "iscsi_target_node_set_redirect", 00:04:37.607 "iscsi_target_node_set_auth", 00:04:37.607 "iscsi_target_node_add_lun", 00:04:37.607 "iscsi_get_stats", 00:04:37.607 "iscsi_get_connections", 00:04:37.607 "iscsi_portal_group_set_auth", 00:04:37.607 "iscsi_start_portal_group", 00:04:37.607 "iscsi_delete_portal_group", 00:04:37.607 "iscsi_create_portal_group", 00:04:37.607 "iscsi_get_portal_groups", 00:04:37.607 "iscsi_delete_target_node", 00:04:37.607 "iscsi_target_node_remove_pg_ig_maps", 00:04:37.607 "iscsi_target_node_add_pg_ig_maps", 00:04:37.607 "iscsi_create_target_node", 00:04:37.607 "iscsi_get_target_nodes", 00:04:37.607 "iscsi_delete_initiator_group", 00:04:37.607 "iscsi_initiator_group_remove_initiators", 00:04:37.607 "iscsi_initiator_group_add_initiators", 00:04:37.607 "iscsi_create_initiator_group", 00:04:37.607 "iscsi_get_initiator_groups", 00:04:37.607 "nvmf_set_crdt", 00:04:37.607 "nvmf_set_config", 00:04:37.607 "nvmf_set_max_subsystems", 00:04:37.607 "nvmf_stop_mdns_prr", 00:04:37.607 "nvmf_publish_mdns_prr", 00:04:37.607 "nvmf_subsystem_get_listeners", 00:04:37.607 "nvmf_subsystem_get_qpairs", 00:04:37.607 "nvmf_subsystem_get_controllers", 00:04:37.607 "nvmf_get_stats", 00:04:37.607 "nvmf_get_transports", 00:04:37.607 "nvmf_create_transport", 00:04:37.607 "nvmf_get_targets", 00:04:37.607 "nvmf_delete_target", 00:04:37.607 "nvmf_create_target", 00:04:37.607 "nvmf_subsystem_allow_any_host", 00:04:37.607 "nvmf_subsystem_remove_host", 00:04:37.607 "nvmf_subsystem_add_host", 00:04:37.607 "nvmf_ns_remove_host", 00:04:37.607 "nvmf_ns_add_host", 00:04:37.607 "nvmf_subsystem_remove_ns", 00:04:37.607 "nvmf_subsystem_add_ns", 00:04:37.607 "nvmf_subsystem_listener_set_ana_state", 00:04:37.607 "nvmf_discovery_get_referrals", 00:04:37.607 "nvmf_discovery_remove_referral", 00:04:37.607 
"nvmf_discovery_add_referral", 00:04:37.607 "nvmf_subsystem_remove_listener", 00:04:37.607 "nvmf_subsystem_add_listener", 00:04:37.607 "nvmf_delete_subsystem", 00:04:37.607 "nvmf_create_subsystem", 00:04:37.607 "nvmf_get_subsystems", 00:04:37.607 "env_dpdk_get_mem_stats", 00:04:37.607 "nbd_get_disks", 00:04:37.607 "nbd_stop_disk", 00:04:37.607 "nbd_start_disk", 00:04:37.607 "ublk_recover_disk", 00:04:37.607 "ublk_get_disks", 00:04:37.607 "ublk_stop_disk", 00:04:37.607 "ublk_start_disk", 00:04:37.607 "ublk_destroy_target", 00:04:37.607 "ublk_create_target", 00:04:37.607 "virtio_blk_create_transport", 00:04:37.607 "virtio_blk_get_transports", 00:04:37.607 "vhost_controller_set_coalescing", 00:04:37.607 "vhost_get_controllers", 00:04:37.607 "vhost_delete_controller", 00:04:37.607 "vhost_create_blk_controller", 00:04:37.607 "vhost_scsi_controller_remove_target", 00:04:37.607 "vhost_scsi_controller_add_target", 00:04:37.607 "vhost_start_scsi_controller", 00:04:37.607 "vhost_create_scsi_controller", 00:04:37.607 "thread_set_cpumask", 00:04:37.607 "framework_get_governor", 00:04:37.607 "framework_get_scheduler", 00:04:37.607 "framework_set_scheduler", 00:04:37.607 "framework_get_reactors", 00:04:37.607 "thread_get_io_channels", 00:04:37.607 "thread_get_pollers", 00:04:37.607 "thread_get_stats", 00:04:37.607 "framework_monitor_context_switch", 00:04:37.607 "spdk_kill_instance", 00:04:37.607 "log_enable_timestamps", 00:04:37.607 "log_get_flags", 00:04:37.607 "log_clear_flag", 00:04:37.607 "log_set_flag", 00:04:37.607 "log_get_level", 00:04:37.607 "log_set_level", 00:04:37.607 "log_get_print_level", 00:04:37.607 "log_set_print_level", 00:04:37.607 "framework_enable_cpumask_locks", 00:04:37.607 "framework_disable_cpumask_locks", 00:04:37.607 "framework_wait_init", 00:04:37.607 "framework_start_init", 00:04:37.607 "scsi_get_devices", 00:04:37.607 "bdev_get_histogram", 00:04:37.607 "bdev_enable_histogram", 00:04:37.607 "bdev_set_qos_limit", 00:04:37.607 
"bdev_set_qd_sampling_period", 00:04:37.607 "bdev_get_bdevs", 00:04:37.607 "bdev_reset_iostat", 00:04:37.607 "bdev_get_iostat", 00:04:37.607 "bdev_examine", 00:04:37.607 "bdev_wait_for_examine", 00:04:37.607 "bdev_set_options", 00:04:37.607 "notify_get_notifications", 00:04:37.607 "notify_get_types", 00:04:37.607 "accel_get_stats", 00:04:37.607 "accel_set_options", 00:04:37.607 "accel_set_driver", 00:04:37.607 "accel_crypto_key_destroy", 00:04:37.607 "accel_crypto_keys_get", 00:04:37.607 "accel_crypto_key_create", 00:04:37.607 "accel_assign_opc", 00:04:37.607 "accel_get_module_info", 00:04:37.607 "accel_get_opc_assignments", 00:04:37.607 "vmd_rescan", 00:04:37.607 "vmd_remove_device", 00:04:37.607 "vmd_enable", 00:04:37.607 "sock_get_default_impl", 00:04:37.607 "sock_set_default_impl", 00:04:37.607 "sock_impl_set_options", 00:04:37.607 "sock_impl_get_options", 00:04:37.607 "iobuf_get_stats", 00:04:37.607 "iobuf_set_options", 00:04:37.607 "framework_get_pci_devices", 00:04:37.607 "framework_get_config", 00:04:37.607 "framework_get_subsystems", 00:04:37.607 "trace_get_info", 00:04:37.607 "trace_get_tpoint_group_mask", 00:04:37.607 "trace_disable_tpoint_group", 00:04:37.607 "trace_enable_tpoint_group", 00:04:37.607 "trace_clear_tpoint_mask", 00:04:37.607 "trace_set_tpoint_mask", 00:04:37.607 "keyring_get_keys", 00:04:37.607 "spdk_get_version", 00:04:37.607 "rpc_get_methods" 00:04:37.607 ] 00:04:37.866 19:43:06 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:37.866 19:43:06 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:37.866 19:43:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:37.866 19:43:06 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:37.866 19:43:06 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59883 00:04:37.866 19:43:06 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 59883 ']' 00:04:37.866 19:43:06 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 
59883 00:04:37.866 19:43:06 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:04:37.866 19:43:06 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:37.866 19:43:06 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59883 00:04:37.866 19:43:06 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:37.866 killing process with pid 59883 00:04:37.866 19:43:06 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:37.866 19:43:06 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59883' 00:04:37.866 19:43:06 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 59883 00:04:37.866 19:43:06 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 59883 00:04:38.124 00:04:38.124 real 0m1.915s 00:04:38.124 user 0m3.640s 00:04:38.124 sys 0m0.475s 00:04:38.124 19:43:06 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.125 19:43:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:38.125 ************************************ 00:04:38.125 END TEST spdkcli_tcp 00:04:38.125 ************************************ 00:04:38.125 19:43:06 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:38.125 19:43:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.125 19:43:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.125 19:43:06 -- common/autotest_common.sh@10 -- # set +x 00:04:38.125 ************************************ 00:04:38.125 START TEST dpdk_mem_utility 00:04:38.125 ************************************ 00:04:38.125 19:43:06 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:38.383 * Looking for test storage... 
00:04:38.383 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:38.383 19:43:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:38.383 19:43:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59974 00:04:38.383 19:43:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59974 00:04:38.383 19:43:06 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 59974 ']' 00:04:38.383 19:43:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.383 19:43:06 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.383 19:43:06 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:38.383 19:43:06 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.383 19:43:06 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:38.383 19:43:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:38.383 [2024-07-24 19:43:06.903500] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:04:38.383 [2024-07-24 19:43:06.903598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59974 ] 00:04:38.383 [2024-07-24 19:43:07.037035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.640 [2024-07-24 19:43:07.147150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.640 [2024-07-24 19:43:07.191442] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:39.205 19:43:07 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:39.205 19:43:07 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:04:39.205 19:43:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:39.205 19:43:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:39.205 19:43:07 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.205 19:43:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:39.205 { 00:04:39.205 "filename": "/tmp/spdk_mem_dump.txt" 00:04:39.205 } 00:04:39.205 19:43:07 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.206 19:43:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 
00:04:39.465 size: 51.011292 MiB name: evtpool_59974 00:04:39.465 size: 50.003479 MiB name: msgpool_59974 00:04:39.465 size: 21.763794 MiB name: PDU_Pool 00:04:39.465 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:39.466 size: 0.026123 MiB name: Session_Pool 00:04:39.466 end mempools------- 00:04:39.466 6 memzones totaling size 4.142822 MiB 00:04:39.466 size: 1.000366 MiB name: RG_ring_0_59974 00:04:39.466 size: 1.000366 MiB name: RG_ring_1_59974 00:04:39.466 size: 1.000366 MiB name: RG_ring_4_59974 00:04:39.466 size: 1.000366 MiB name: RG_ring_5_59974 00:04:39.466 size: 0.125366 MiB name: RG_ring_2_59974 00:04:39.466 size: 0.015991 MiB name: RG_ring_3_59974 00:04:39.466 end memzones------- 00:04:39.466 19:43:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:39.466 heap id: 0 total size: 814.000000 MiB number of busy elements: 304 number of free elements: 15 00:04:39.466 list of free elements. size: 12.471191 MiB 00:04:39.466 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:39.466 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:39.466 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:39.466 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:39.466 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:39.466 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:39.466 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:39.466 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:39.466 element at address: 0x200000200000 with size: 0.833191 MiB 00:04:39.466 element at address: 0x20001aa00000 with size: 0.567871 MiB 00:04:39.466 element at address: 0x20000b200000 with size: 0.488892 MiB 00:04:39.466 element at address: 0x200000800000 with size: 0.486145 MiB 00:04:39.466 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:39.466 element at address: 0x200027e00000 
with size: 0.395752 MiB 00:04:39.466 element at address: 0x200003a00000 with size: 0.348572 MiB 00:04:39.466 list of standard malloc elements. size: 199.266235 MiB 00:04:39.466 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:39.466 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:39.466 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:39.466 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:39.466 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:39.466 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:39.466 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:39.466 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:39.466 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:39.466 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:04:39.466 element at address: 
0x2000002d60c0 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:04:39.466 
element at address: 0x2000002d7700 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:39.466 element at address: 0x20000087c740 with size: 0.000183 MiB 00:04:39.466 element at address: 0x20000087c800 with size: 0.000183 MiB 00:04:39.466 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:04:39.466 element at address: 0x20000087c980 with size: 0.000183 MiB 00:04:39.466 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:04:39.466 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:04:39.466 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:04:39.466 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:04:39.466 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:04:39.466 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:39.466 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:39.466 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003a59480 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003a59540 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003a59600 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003a59780 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003a59840 with size: 0.000183 
MiB 00:04:39.466 element at address: 0x200003a59900 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003a5ad40 
with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:39.466 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:39.467 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:39.467 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:39.467 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:39.467 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:39.467 element at 
address: 0x2000194bc740 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa91600 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa916c0 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa91780 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa92980 with size: 0.000183 MiB 
00:04:39.467 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa93e80 with 
size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:04:39.467 element at address: 
0x20001aa95380 with size: 0.000183 MiB 00:04:39.467 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:39.467 element at address: 0x200027e65500 with size: 0.000183 MiB 00:04:39.467 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:04:39.467 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:04:39.467 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:04:39.467 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:04:39.467 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:04:39.467 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:04:39.467 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:04:39.467 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:04:39.467 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:04:39.467 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:04:39.467 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:04:39.467 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:04:39.467 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:04:39.467 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:04:39.467 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:04:39.467 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:04:39.467 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:04:39.467 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:04:39.467 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:04:39.467 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:04:39.467 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:04:39.467 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:04:39.467 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:04:39.467 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:04:39.467 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:04:39.467 
element at address: 0x200027e6d500 with size: 0.000183 MiB 00:04:39.467 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:04:39.467 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:04:39.467 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:04:39.467 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:04:39.467 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:04:39.467 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:04:39.467 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:04:39.467 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:04:39.467 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6e940 with size: 0.000183 
MiB 00:04:39.468 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6fe40 
with size: 0.000183 MiB 00:04:39.468 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:39.468 list of memzone associated elements. size: 602.262573 MiB 00:04:39.468 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:39.468 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:39.468 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:39.468 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:39.468 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:39.468 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_59974_0 00:04:39.468 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:39.468 associated memzone info: size: 48.002930 MiB name: MP_evtpool_59974_0 00:04:39.468 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:39.468 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59974_0 00:04:39.468 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:39.468 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:39.468 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:39.468 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:39.468 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:39.468 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_59974 00:04:39.468 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:39.468 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59974 00:04:39.468 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:39.468 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59974 00:04:39.468 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:39.468 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:39.468 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:39.468 associated memzone info: size: 
1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:39.468 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:39.468 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:39.468 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:39.468 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:39.468 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:39.468 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59974 00:04:39.468 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:39.468 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59974 00:04:39.468 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:39.468 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59974 00:04:39.468 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:39.468 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59974 00:04:39.468 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:39.468 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59974 00:04:39.468 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:39.468 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:39.468 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:39.468 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:39.468 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:39.468 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:39.468 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:39.468 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59974 00:04:39.468 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:39.468 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:39.468 element at address: 0x200027e65680 with size: 0.023743 MiB 00:04:39.468 associated memzone 
info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:39.468 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:39.468 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59974 00:04:39.468 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:04:39.468 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:39.468 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:04:39.468 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59974 00:04:39.468 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:39.468 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59974 00:04:39.468 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:04:39.468 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:39.468 19:43:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:39.468 19:43:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59974 00:04:39.468 19:43:08 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 59974 ']' 00:04:39.468 19:43:08 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 59974 00:04:39.468 19:43:08 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:04:39.468 19:43:08 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:39.468 19:43:08 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59974 00:04:39.468 killing process with pid 59974 00:04:39.468 19:43:08 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:39.468 19:43:08 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:39.468 19:43:08 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59974' 00:04:39.468 19:43:08 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 59974 00:04:39.468 19:43:08 
dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 59974 00:04:40.037 00:04:40.037 real 0m1.637s 00:04:40.037 user 0m1.796s 00:04:40.037 sys 0m0.396s 00:04:40.037 19:43:08 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:40.037 ************************************ 00:04:40.037 19:43:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:40.037 END TEST dpdk_mem_utility 00:04:40.037 ************************************ 00:04:40.037 19:43:08 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:40.037 19:43:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.037 19:43:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.037 19:43:08 -- common/autotest_common.sh@10 -- # set +x 00:04:40.037 ************************************ 00:04:40.037 START TEST event 00:04:40.037 ************************************ 00:04:40.037 19:43:08 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:40.037 * Looking for test storage... 
00:04:40.037 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:40.037 19:43:08 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:40.037 19:43:08 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:40.037 19:43:08 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:40.037 19:43:08 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:04:40.037 19:43:08 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.037 19:43:08 event -- common/autotest_common.sh@10 -- # set +x 00:04:40.037 ************************************ 00:04:40.037 START TEST event_perf 00:04:40.037 ************************************ 00:04:40.037 19:43:08 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:40.037 Running I/O for 1 seconds...[2024-07-24 19:43:08.577459] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:04:40.037 [2024-07-24 19:43:08.577577] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60046 ] 00:04:40.296 [2024-07-24 19:43:08.720206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:40.296 [2024-07-24 19:43:08.836843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.296 [2024-07-24 19:43:08.836977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:40.296 [2024-07-24 19:43:08.837175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:40.296 [2024-07-24 19:43:08.837188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.671 Running I/O for 1 seconds... 
00:04:41.671 lcore 0: 108908 00:04:41.671 lcore 1: 108908 00:04:41.671 lcore 2: 108907 00:04:41.671 lcore 3: 108905 00:04:41.671 done. 00:04:41.671 00:04:41.671 real 0m1.369s 00:04:41.671 user 0m4.166s 00:04:41.671 sys 0m0.070s 00:04:41.671 19:43:09 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:41.671 ************************************ 00:04:41.671 END TEST event_perf 00:04:41.671 ************************************ 00:04:41.671 19:43:09 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:41.671 19:43:09 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:41.671 19:43:09 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:41.672 19:43:09 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.672 19:43:09 event -- common/autotest_common.sh@10 -- # set +x 00:04:41.672 ************************************ 00:04:41.672 START TEST event_reactor 00:04:41.672 ************************************ 00:04:41.672 19:43:09 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:41.672 [2024-07-24 19:43:10.003812] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:04:41.672 [2024-07-24 19:43:10.003932] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60084 ] 00:04:41.672 [2024-07-24 19:43:10.144902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.672 [2024-07-24 19:43:10.253855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.049 test_start 00:04:43.049 oneshot 00:04:43.049 tick 100 00:04:43.049 tick 100 00:04:43.049 tick 250 00:04:43.049 tick 100 00:04:43.049 tick 100 00:04:43.049 tick 100 00:04:43.049 tick 250 00:04:43.049 tick 500 00:04:43.049 tick 100 00:04:43.049 tick 100 00:04:43.049 tick 250 00:04:43.049 tick 100 00:04:43.050 tick 100 00:04:43.050 test_end 00:04:43.050 00:04:43.050 real 0m1.433s 00:04:43.050 user 0m1.252s 00:04:43.050 sys 0m0.069s 00:04:43.050 19:43:11 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:43.050 19:43:11 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:43.050 ************************************ 00:04:43.050 END TEST event_reactor 00:04:43.050 ************************************ 00:04:43.050 19:43:11 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:43.050 19:43:11 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:43.050 19:43:11 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.050 19:43:11 event -- common/autotest_common.sh@10 -- # set +x 00:04:43.050 ************************************ 00:04:43.050 START TEST event_reactor_perf 00:04:43.050 ************************************ 00:04:43.050 19:43:11 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:43.050 [2024-07-24 
19:43:11.504757] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:04:43.050 [2024-07-24 19:43:11.504914] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60120 ] 00:04:43.050 [2024-07-24 19:43:11.650991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.308 [2024-07-24 19:43:11.831324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.745 test_start 00:04:44.745 test_end 00:04:44.745 Performance: 373661 events per second 00:04:44.745 00:04:44.745 real 0m1.439s 00:04:44.745 user 0m1.248s 00:04:44.745 sys 0m0.081s 00:04:44.745 19:43:12 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:44.745 19:43:12 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:44.745 ************************************ 00:04:44.745 END TEST event_reactor_perf 00:04:44.745 ************************************ 00:04:44.745 19:43:12 event -- event/event.sh@49 -- # uname -s 00:04:44.745 19:43:12 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:44.745 19:43:12 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:44.745 19:43:12 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:44.745 19:43:12 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:44.745 19:43:12 event -- common/autotest_common.sh@10 -- # set +x 00:04:44.745 ************************************ 00:04:44.745 START TEST event_scheduler 00:04:44.745 ************************************ 00:04:44.745 19:43:12 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:44.745 * Looking for test storage... 
00:04:44.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:44.745 19:43:13 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:44.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.745 19:43:13 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60181 00:04:44.745 19:43:13 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:44.745 19:43:13 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60181 00:04:44.745 19:43:13 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 60181 ']' 00:04:44.745 19:43:13 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:44.745 19:43:13 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.745 19:43:13 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:44.745 19:43:13 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.745 19:43:13 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:44.745 19:43:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:44.745 [2024-07-24 19:43:13.133040] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:04:44.745 [2024-07-24 19:43:13.133192] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60181 ] 00:04:44.745 [2024-07-24 19:43:13.288231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:45.003 [2024-07-24 19:43:13.484478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.003 [2024-07-24 19:43:13.484694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.003 [2024-07-24 19:43:13.484863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:45.003 [2024-07-24 19:43:13.484867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:45.569 19:43:14 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:45.569 19:43:14 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:04:45.569 19:43:14 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:45.569 19:43:14 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.569 19:43:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:45.569 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:45.569 POWER: Cannot set governor of lcore 0 to userspace 00:04:45.569 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:45.569 POWER: Cannot set governor of lcore 0 to performance 00:04:45.569 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:45.569 POWER: Cannot set governor of lcore 0 to userspace 00:04:45.569 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:45.569 POWER: Cannot set governor of lcore 0 to userspace 00:04:45.569 GUEST_CHANNEL: Opening 
channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:45.569 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:45.569 POWER: Unable to set Power Management Environment for lcore 0 [2024-07-24 19:43:14.172505] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 [2024-07-24 19:43:14.172521] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 [2024-07-24 19:43:14.172530] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor [2024-07-24 19:43:14.172543] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 [2024-07-24 19:43:14.172551] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 [2024-07-24 19:43:14.172559] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 19:43:14 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.569 19:43:14 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:45.569 19:43:14 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.569 19:43:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:45.827 [2024-07-24 19:43:14.268583] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:45.827 [2024-07-24 19:43:14.322647] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:04:45.827 19:43:14 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.828 19:43:14 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:45.828 19:43:14 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:45.828 19:43:14 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:45.828 19:43:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:45.828 ************************************ 00:04:45.828 START TEST scheduler_create_thread 00:04:45.828 ************************************ 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.828 2 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.828 3 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.828 4 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.828 5 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.828 6 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:04:45.828 7 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.828 8 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.828 9 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.828 10 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.828 19:43:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.729 19:43:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.729 19:43:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:47.729 19:43:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:47.729 19:43:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.729 19:43:15 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.294 ************************************ 00:04:48.294 END TEST scheduler_create_thread 00:04:48.294 ************************************ 00:04:48.294 19:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.294 00:04:48.294 real 0m2.615s 00:04:48.294 user 0m0.019s 00:04:48.294 sys 0m0.006s 00:04:48.294 19:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:48.294 19:43:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.552 19:43:16 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:48.552 19:43:16 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60181 00:04:48.552 19:43:16 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 60181 ']' 00:04:48.552 19:43:16 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 60181 00:04:48.552 19:43:16 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:04:48.552 19:43:17 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:48.552 19:43:17 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60181 00:04:48.552 killing process with pid 60181 00:04:48.552 19:43:17 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:04:48.552 19:43:17 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:04:48.552 19:43:17 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60181' 00:04:48.552 19:43:17 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 60181 00:04:48.552 19:43:17 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 60181 00:04:48.809 [2024-07-24 19:43:17.427880] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:49.376 00:04:49.376 real 0m4.804s 00:04:49.376 user 0m8.721s 00:04:49.376 sys 0m0.490s 00:04:49.376 19:43:17 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.376 19:43:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:49.376 ************************************ 00:04:49.376 END TEST event_scheduler 00:04:49.376 ************************************ 00:04:49.376 19:43:17 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:49.376 19:43:17 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:49.376 19:43:17 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.376 19:43:17 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.376 19:43:17 event -- common/autotest_common.sh@10 -- # set +x 00:04:49.376 ************************************ 00:04:49.376 START TEST app_repeat 00:04:49.376 ************************************ 00:04:49.376 19:43:17 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:04:49.376 19:43:17 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.376 19:43:17 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.376 19:43:17 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:49.377 19:43:17 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:49.377 19:43:17 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:49.377 19:43:17 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:49.377 19:43:17 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:49.377 19:43:17 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60281 00:04:49.377 19:43:17 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.377 Process app_repeat pid: 60281 00:04:49.377 
spdk_app_start Round 0 00:04:49.377 19:43:17 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60281' 00:04:49.377 19:43:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:49.377 19:43:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:49.377 19:43:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60281 /var/tmp/spdk-nbd.sock 00:04:49.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:49.377 19:43:17 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60281 ']' 00:04:49.377 19:43:17 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:49.377 19:43:17 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:49.377 19:43:17 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:49.377 19:43:17 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:49.377 19:43:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:49.377 19:43:17 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:49.377 [2024-07-24 19:43:17.894257] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:04:49.377 [2024-07-24 19:43:17.894374] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60281 ] 00:04:49.634 [2024-07-24 19:43:18.042266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:49.634 [2024-07-24 19:43:18.194579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.635 [2024-07-24 19:43:18.194591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.635 [2024-07-24 19:43:18.239639] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:50.567 19:43:18 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:50.567 19:43:18 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:50.567 19:43:18 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:50.825 Malloc0 00:04:50.825 19:43:19 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:51.083 Malloc1 00:04:51.083 19:43:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:51.083 19:43:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.083 19:43:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:51.083 19:43:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:51.083 19:43:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.083 19:43:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:51.084 19:43:19 event.app_repeat -- bdev/nbd_common.sh@94 -- 
# nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:51.084 19:43:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.084 19:43:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:51.084 19:43:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:51.084 19:43:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.084 19:43:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:51.084 19:43:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:51.084 19:43:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:51.084 19:43:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.084 19:43:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:51.343 /dev/nbd0 00:04:51.343 19:43:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:51.343 19:43:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:51.343 19:43:19 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:51.343 19:43:19 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:51.343 19:43:19 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:51.344 19:43:19 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:51.344 19:43:19 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:51.344 19:43:19 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:51.344 19:43:19 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:51.344 19:43:19 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:51.344 19:43:19 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:51.344 1+0 records in 00:04:51.344 1+0 records out 00:04:51.344 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281179 s, 14.6 MB/s 00:04:51.344 19:43:19 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:51.344 19:43:19 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:51.344 19:43:19 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:51.344 19:43:19 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:51.344 19:43:19 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:51.344 19:43:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:51.344 19:43:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.344 19:43:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:51.909 /dev/nbd1 00:04:51.909 19:43:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:51.909 19:43:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:51.909 19:43:20 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:51.909 19:43:20 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:51.909 19:43:20 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:51.909 19:43:20 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:51.909 19:43:20 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:51.909 19:43:20 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:51.909 19:43:20 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:51.909 19:43:20 event.app_repeat -- common/autotest_common.sh@884 -- # 
(( i <= 20 )) 00:04:51.909 19:43:20 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:51.909 1+0 records in 00:04:51.909 1+0 records out 00:04:51.909 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000555258 s, 7.4 MB/s 00:04:51.909 19:43:20 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:51.909 19:43:20 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:51.909 19:43:20 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:51.909 19:43:20 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:51.909 19:43:20 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:51.909 19:43:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:51.909 19:43:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.909 19:43:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:51.909 19:43:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.909 19:43:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:52.167 19:43:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:52.167 { 00:04:52.167 "nbd_device": "/dev/nbd0", 00:04:52.167 "bdev_name": "Malloc0" 00:04:52.167 }, 00:04:52.167 { 00:04:52.167 "nbd_device": "/dev/nbd1", 00:04:52.167 "bdev_name": "Malloc1" 00:04:52.167 } 00:04:52.167 ]' 00:04:52.167 19:43:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:52.167 { 00:04:52.167 "nbd_device": "/dev/nbd0", 00:04:52.167 "bdev_name": "Malloc0" 00:04:52.167 }, 00:04:52.167 { 00:04:52.167 "nbd_device": "/dev/nbd1", 00:04:52.167 "bdev_name": "Malloc1" 00:04:52.167 } 
00:04:52.167 ]' 00:04:52.167 19:43:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:52.167 19:43:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:52.167 /dev/nbd1' 00:04:52.167 19:43:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:52.167 /dev/nbd1' 00:04:52.167 19:43:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:52.167 19:43:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:52.167 19:43:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:52.167 19:43:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:52.167 19:43:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:52.167 19:43:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:52.167 19:43:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.167 19:43:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:52.167 19:43:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:52.167 19:43:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:52.167 19:43:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:52.167 19:43:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:52.167 256+0 records in 00:04:52.167 256+0 records out 00:04:52.167 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00604009 s, 174 MB/s 00:04:52.167 19:43:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:52.167 19:43:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:52.167 256+0 records in 00:04:52.167 256+0 records out 
00:04:52.167 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0308047 s, 34.0 MB/s 00:04:52.167 19:43:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:52.167 19:43:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:52.167 256+0 records in 00:04:52.167 256+0 records out 00:04:52.167 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260529 s, 40.2 MB/s 00:04:52.167 19:43:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:52.167 19:43:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.167 19:43:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:52.167 19:43:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:52.167 19:43:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:52.167 19:43:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:52.167 19:43:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:52.167 19:43:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:52.167 19:43:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:52.167 19:43:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:52.167 19:43:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:52.167 19:43:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:52.167 19:43:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:52.167 19:43:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # 
local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.167 19:43:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.167 19:43:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:52.167 19:43:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:52.167 19:43:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:52.167 19:43:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:52.425 19:43:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:52.425 19:43:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:52.425 19:43:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:52.425 19:43:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:52.426 19:43:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:52.426 19:43:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:52.426 19:43:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:52.426 19:43:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:52.426 19:43:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:52.426 19:43:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:52.684 19:43:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:52.684 19:43:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:52.684 19:43:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:52.684 19:43:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:52.684 19:43:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:52.684 19:43:21 event.app_repeat -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:52.684 19:43:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:52.684 19:43:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:52.684 19:43:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:52.684 19:43:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.684 19:43:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:52.942 19:43:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:52.942 19:43:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:52.942 19:43:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:52.942 19:43:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:52.942 19:43:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:52.942 19:43:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:52.942 19:43:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:52.942 19:43:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:52.942 19:43:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:52.942 19:43:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:52.942 19:43:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:52.942 19:43:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:52.942 19:43:21 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:53.199 19:43:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:53.457 [2024-07-24 19:43:21.997439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:53.714 [2024-07-24 19:43:22.126605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started 
on core 1 00:04:53.714 [2024-07-24 19:43:22.126622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.714 [2024-07-24 19:43:22.172088] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:53.714 [2024-07-24 19:43:22.172174] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:53.714 [2024-07-24 19:43:22.172188] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:56.245 spdk_app_start Round 1 00:04:56.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:56.245 19:43:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:56.245 19:43:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:56.245 19:43:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60281 /var/tmp/spdk-nbd.sock 00:04:56.245 19:43:24 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60281 ']' 00:04:56.245 19:43:24 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:56.245 19:43:24 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:56.245 19:43:24 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:04:56.245 19:43:24 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:56.245 19:43:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:56.502 19:43:25 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:56.503 19:43:25 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:56.503 19:43:25 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:56.762 Malloc0 00:04:56.762 19:43:25 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:57.020 Malloc1 00:04:57.020 19:43:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:57.020 19:43:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.020 19:43:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:57.020 19:43:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:57.020 19:43:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.020 19:43:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:57.020 19:43:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:57.020 19:43:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.020 19:43:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:57.020 19:43:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:57.020 19:43:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.020 19:43:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:57.020 19:43:25 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:57.020 19:43:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:57.020 19:43:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.020 19:43:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:57.276 /dev/nbd0 00:04:57.276 19:43:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:57.276 19:43:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:57.276 19:43:25 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:57.276 19:43:25 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:57.276 19:43:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:57.276 19:43:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:57.276 19:43:25 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:57.276 19:43:25 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:57.276 19:43:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:57.276 19:43:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:57.276 19:43:25 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:57.276 1+0 records in 00:04:57.276 1+0 records out 00:04:57.276 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000500227 s, 8.2 MB/s 00:04:57.276 19:43:25 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:57.276 19:43:25 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:57.276 19:43:25 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:57.276 19:43:25 
event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:57.276 19:43:25 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:57.276 19:43:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:57.276 19:43:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.276 19:43:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:57.534 /dev/nbd1 00:04:57.534 19:43:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:57.534 19:43:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:57.534 19:43:26 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:57.534 19:43:26 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:57.534 19:43:26 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:57.534 19:43:26 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:57.534 19:43:26 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:57.534 19:43:26 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:57.534 19:43:26 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:57.534 19:43:26 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:57.534 19:43:26 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:57.534 1+0 records in 00:04:57.534 1+0 records out 00:04:57.534 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000342809 s, 11.9 MB/s 00:04:57.534 19:43:26 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:57.793 19:43:26 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:57.793 19:43:26 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:57.793 19:43:26 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:57.793 19:43:26 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:57.793 19:43:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:57.793 19:43:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.793 19:43:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:57.793 19:43:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.793 19:43:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:58.050 19:43:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:58.050 { 00:04:58.050 "nbd_device": "/dev/nbd0", 00:04:58.050 "bdev_name": "Malloc0" 00:04:58.050 }, 00:04:58.050 { 00:04:58.050 "nbd_device": "/dev/nbd1", 00:04:58.051 "bdev_name": "Malloc1" 00:04:58.051 } 00:04:58.051 ]' 00:04:58.051 19:43:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:58.051 { 00:04:58.051 "nbd_device": "/dev/nbd0", 00:04:58.051 "bdev_name": "Malloc0" 00:04:58.051 }, 00:04:58.051 { 00:04:58.051 "nbd_device": "/dev/nbd1", 00:04:58.051 "bdev_name": "Malloc1" 00:04:58.051 } 00:04:58.051 ]' 00:04:58.051 19:43:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:58.051 19:43:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:58.051 /dev/nbd1' 00:04:58.051 19:43:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:58.051 /dev/nbd1' 00:04:58.051 19:43:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:58.051 19:43:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:58.051 19:43:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:58.051 
19:43:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:58.051 19:43:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:58.051 19:43:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:58.051 19:43:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.051 19:43:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:58.051 19:43:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:58.051 19:43:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:58.051 19:43:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:58.051 19:43:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:58.051 256+0 records in 00:04:58.051 256+0 records out 00:04:58.051 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00701794 s, 149 MB/s 00:04:58.051 19:43:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:58.051 19:43:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:58.051 256+0 records in 00:04:58.051 256+0 records out 00:04:58.051 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0312466 s, 33.6 MB/s 00:04:58.051 19:43:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:58.051 19:43:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:58.051 256+0 records in 00:04:58.051 256+0 records out 00:04:58.051 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0326218 s, 32.1 MB/s 00:04:58.051 19:43:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:04:58.051 19:43:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.051 19:43:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:58.051 19:43:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:58.051 19:43:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:58.051 19:43:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:58.051 19:43:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:58.051 19:43:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:58.051 19:43:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:58.051 19:43:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:58.051 19:43:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:58.051 19:43:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:58.051 19:43:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:58.051 19:43:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.051 19:43:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.051 19:43:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:58.051 19:43:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:58.051 19:43:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:58.051 19:43:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:58.309 19:43:26 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:58.566 19:43:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:58.566 19:43:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:58.566 19:43:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:58.566 19:43:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:58.566 19:43:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:58.566 19:43:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:58.566 19:43:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:58.566 19:43:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:58.566 19:43:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:58.566 19:43:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:58.824 19:43:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:58.824 19:43:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:58.824 19:43:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:58.824 19:43:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:58.824 19:43:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:58.824 19:43:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:58.824 19:43:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:58.824 19:43:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:58.824 19:43:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.824 19:43:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:59.083 19:43:27 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:59.083 19:43:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:59.083 19:43:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:59.083 19:43:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:59.083 19:43:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:59.083 19:43:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:59.083 19:43:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:59.083 19:43:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:59.083 19:43:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:59.083 19:43:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:59.083 19:43:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:59.083 19:43:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:59.083 19:43:27 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:59.341 19:43:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:59.599 [2024-07-24 19:43:28.030063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:59.599 [2024-07-24 19:43:28.153904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.599 [2024-07-24 19:43:28.153891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.599 [2024-07-24 19:43:28.208442] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:59.599 [2024-07-24 19:43:28.208560] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:59.599 [2024-07-24 19:43:28.208579] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:05:02.882 spdk_app_start Round 2 00:05:02.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:02.882 19:43:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:02.882 19:43:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:02.882 19:43:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60281 /var/tmp/spdk-nbd.sock 00:05:02.882 19:43:30 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60281 ']' 00:05:02.882 19:43:30 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:02.882 19:43:30 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:02.882 19:43:30 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:02.882 19:43:30 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:02.882 19:43:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:02.882 19:43:31 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:02.882 19:43:31 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:02.882 19:43:31 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:03.140 Malloc0 00:05:03.140 19:43:31 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:03.398 Malloc1 00:05:03.398 19:43:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:03.398 19:43:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.398 19:43:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:03.398 
19:43:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:03.398 19:43:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.398 19:43:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:03.398 19:43:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:03.398 19:43:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.398 19:43:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:03.398 19:43:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:03.398 19:43:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.398 19:43:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:03.398 19:43:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:03.398 19:43:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:03.398 19:43:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.398 19:43:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:03.657 /dev/nbd0 00:05:03.657 19:43:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:03.657 19:43:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:03.657 19:43:32 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:03.657 19:43:32 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:03.657 19:43:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:03.657 19:43:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:03.657 19:43:32 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:03.657 19:43:32 
event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:03.657 19:43:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:03.657 19:43:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:03.657 19:43:32 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:03.657 1+0 records in 00:05:03.657 1+0 records out 00:05:03.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035526 s, 11.5 MB/s 00:05:03.657 19:43:32 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:03.657 19:43:32 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:03.657 19:43:32 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:03.657 19:43:32 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:03.657 19:43:32 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:03.657 19:43:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:03.657 19:43:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.657 19:43:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:03.915 /dev/nbd1 00:05:03.915 19:43:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:03.915 19:43:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:03.915 19:43:32 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:03.915 19:43:32 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:03.915 19:43:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:03.915 19:43:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:03.915 19:43:32 
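The trace above repeatedly runs the `waitfornbd` loop from nbd_common.sh/autotest_common.sh: up to 20 iterations of `grep -q -w nbdX /proc/partitions`, then a single direct-I/O read to confirm the device answers. The following is a reconstruction of that pattern for readers of this log, not the exact SPDK source; the `poll_until` helper name is illustrative.

```shell
# Generic bounded poll: retry a command up to $1 times with a short sleep,
# mirroring the (( i <= 20 )) / grep / break loop visible in the trace.
poll_until() {
    local tries=$1; shift
    local i
    for ((i = 1; i <= tries; i++)); do
        "$@" && return 0
        sleep 0.1
    done
    return 1
}

# waitfornbd as seen in the trace: wait for the device to appear in
# /proc/partitions (-w so nbd1 does not also match nbd10), then confirm
# it actually serves I/O with one O_DIRECT single-block read.
waitfornbd() {
    local nbd_name=$1
    poll_until 20 grep -q -w "$nbd_name" /proc/partitions || return 1
    dd if="/dev/$nbd_name" of=/dev/null bs=4096 count=1 iflag=direct
}
```

The direct read matters because the device node can exist in /proc/partitions before the nbd server is ready to satisfy requests.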
event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:03.915 19:43:32 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:03.915 19:43:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:03.915 19:43:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:03.915 19:43:32 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:03.915 1+0 records in 00:05:03.915 1+0 records out 00:05:03.915 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038528 s, 10.6 MB/s 00:05:03.915 19:43:32 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:04.173 19:43:32 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:04.173 19:43:32 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:04.173 19:43:32 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:04.173 19:43:32 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:04.173 19:43:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:04.173 19:43:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.173 19:43:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:04.173 19:43:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.173 19:43:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:04.432 19:43:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:04.432 { 00:05:04.432 "nbd_device": "/dev/nbd0", 00:05:04.432 "bdev_name": "Malloc0" 00:05:04.432 }, 00:05:04.432 { 00:05:04.432 "nbd_device": "/dev/nbd1", 00:05:04.432 "bdev_name": 
"Malloc1" 00:05:04.432 } 00:05:04.432 ]' 00:05:04.432 19:43:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:04.432 { 00:05:04.432 "nbd_device": "/dev/nbd0", 00:05:04.432 "bdev_name": "Malloc0" 00:05:04.432 }, 00:05:04.432 { 00:05:04.432 "nbd_device": "/dev/nbd1", 00:05:04.432 "bdev_name": "Malloc1" 00:05:04.432 } 00:05:04.432 ]' 00:05:04.432 19:43:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:04.432 19:43:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:04.432 /dev/nbd1' 00:05:04.432 19:43:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:04.432 /dev/nbd1' 00:05:04.432 19:43:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:04.432 19:43:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:04.432 19:43:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:04.432 19:43:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:04.432 19:43:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:04.432 19:43:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:04.432 19:43:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.432 19:43:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:04.432 19:43:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:04.432 19:43:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:04.432 19:43:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:04.432 19:43:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:04.432 256+0 records in 00:05:04.432 256+0 records out 00:05:04.432 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00682277 s, 154 MB/s 
00:05:04.432 19:43:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:04.432 19:43:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:04.432 256+0 records in 00:05:04.432 256+0 records out 00:05:04.432 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238238 s, 44.0 MB/s 00:05:04.432 19:43:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:04.432 19:43:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:04.432 256+0 records in 00:05:04.432 256+0 records out 00:05:04.432 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0248646 s, 42.2 MB/s 00:05:04.432 19:43:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:04.432 19:43:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.432 19:43:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:04.432 19:43:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:04.432 19:43:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:04.432 19:43:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:04.432 19:43:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:04.432 19:43:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:04.432 19:43:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:04.432 19:43:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:04.432 19:43:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
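The `nbd_dd_data_verify write` / `nbd_dd_data_verify verify` pair in the trace writes one 1 MiB random pattern onto every nbd device and then compares each device against the saved pattern with `cmp`. A sketch of that flow, reconstructed from the trace (the real script targets /dev/nbd0 and /dev/nbd1 with `oflag=direct`; here the targets are plain paths so regular files also work, and the temp-file name is illustrative):

```shell
# Write one random 1 MiB pattern to every target, or verify every target
# against the saved pattern, matching the dd/cmp sequence in the trace.
nbd_dd_data_verify() {
    local operation=$1; shift
    local tmp_file=/tmp/nbdrandtest.$$   # persists between write and verify
    local dev

    if [ "$operation" = write ]; then
        # 256 x 4096-byte blocks = 1 MiB of random data, as in the trace
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
        for dev in "$@"; do
            dd if="$tmp_file" of="$dev" bs=4096 count=256
        done
    elif [ "$operation" = verify ]; then
        for dev in "$@"; do
            # -b reports differing bytes; -n 1M limits the compare window
            cmp -b -n 1M "$tmp_file" "$dev"
        done
        rm "$tmp_file"
    fi
}
```

Because `cmp` exits non-zero on the first mismatch, any corruption on either device fails the test immediately under `set -e`.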
/dev/nbd1 00:05:04.432 19:43:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:04.432 19:43:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:04.432 19:43:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.432 19:43:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.432 19:43:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:04.432 19:43:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:04.432 19:43:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:04.432 19:43:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:04.999 19:43:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:04.999 19:43:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:04.999 19:43:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:04.999 19:43:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:04.999 19:43:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:04.999 19:43:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:04.999 19:43:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:04.999 19:43:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:04.999 19:43:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:04.999 19:43:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:04.999 19:43:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:04.999 19:43:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:05:04.999 19:43:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:04.999 19:43:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:04.999 19:43:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:04.999 19:43:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:04.999 19:43:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:04.999 19:43:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:04.999 19:43:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:04.999 19:43:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.999 19:43:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:05.256 19:43:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:05.256 19:43:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:05.256 19:43:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:05.256 19:43:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:05.257 19:43:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:05.257 19:43:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:05.257 19:43:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:05.257 19:43:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:05.257 19:43:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:05.257 19:43:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:05.257 19:43:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:05.257 19:43:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:05.514 19:43:33 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
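The `nbd_get_count` sequence above ends with `grep -c /dev/nbd` followed by a bare `true`: `grep -c` exits 1 when it counts zero matches, so the `|| true` (the `-- # true` step in the trace) keeps a `set -e` script alive while still yielding a count of 0 for the empty `[]` disk list. A minimal sketch of just that counting step, assuming the device names have already been extracted from the `nbd_get_disks` JSON (the trace does this with `jq -r '.[] | .nbd_device'`):

```shell
# Count /dev/nbd entries in a newline-separated device list, tolerating an
# empty list: grep -c exits non-zero on zero matches, hence the || true.
count_nbd_devices() {
    local nbd_disks_name=$1
    local count
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
    echo "$count"
}
```

This is why the trace shows `count=2` while the disks are attached and `count=0` after both `nbd_stop_disk` calls, without the pipeline ever aborting the run.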
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:05.514 19:43:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:05.773 [2024-07-24 19:43:34.341083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:06.032 [2024-07-24 19:43:34.455067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.032 [2024-07-24 19:43:34.455071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.032 [2024-07-24 19:43:34.500805] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:06.032 [2024-07-24 19:43:34.500894] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:06.032 [2024-07-24 19:43:34.500908] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:08.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:08.565 19:43:37 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60281 /var/tmp/spdk-nbd.sock 00:05:08.565 19:43:37 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60281 ']' 00:05:08.565 19:43:37 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:08.565 19:43:37 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:08.565 19:43:37 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:08.565 19:43:37 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:08.565 19:43:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:08.824 19:43:37 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:08.824 19:43:37 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:08.824 19:43:37 event.app_repeat -- event/event.sh@39 -- # killprocess 60281 00:05:08.824 19:43:37 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 60281 ']' 00:05:08.824 19:43:37 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 60281 00:05:08.824 19:43:37 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:08.824 19:43:37 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:08.824 19:43:37 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60281 00:05:08.824 killing process with pid 60281 00:05:08.824 19:43:37 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:08.824 19:43:37 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:08.824 19:43:37 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60281' 00:05:08.824 19:43:37 event.app_repeat -- common/autotest_common.sh@969 -- # kill 60281 00:05:08.824 19:43:37 event.app_repeat -- common/autotest_common.sh@974 -- # wait 60281 00:05:09.390 spdk_app_start is called in Round 0. 00:05:09.390 Shutdown signal received, stop current app iteration 00:05:09.390 Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 reinitialization... 00:05:09.390 spdk_app_start is called in Round 1. 00:05:09.390 Shutdown signal received, stop current app iteration 00:05:09.390 Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 reinitialization... 00:05:09.390 spdk_app_start is called in Round 2. 
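The `killprocess 60281` sequence above follows a fixed shape from autotest_common.sh: confirm the pid is alive with `kill -0`, look up its command name with `ps`, refuse to TERM a `sudo` wrapper, then `kill` and `wait`. A reconstruction of that shape (illustrative, not the exact source; `ps --no-headers -o comm=` is the procps form used in the trace and assumes Linux):

```shell
# Terminate a child process by pid the way the trace does: liveness check,
# command-name sanity check, SIGTERM, then reap with wait.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                    # still running?
    local process_name
    process_name=$(ps --no-headers -o comm= -p "$pid")
    [ "$process_name" != sudo ] || return 1       # never TERM the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                           # reap; ignore signal exit status
}
```

The final `wait` is why the log's `-- # wait 60281` step appears after the kill: it both reaps the child and makes the script block until shutdown actually completes.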
00:05:09.390 Shutdown signal received, stop current app iteration 00:05:09.390 Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 reinitialization... 00:05:09.390 spdk_app_start is called in Round 3. 00:05:09.390 Shutdown signal received, stop current app iteration 00:05:09.390 19:43:37 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:09.390 19:43:37 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:09.390 ************************************ 00:05:09.390 END TEST app_repeat 00:05:09.390 ************************************ 00:05:09.390 00:05:09.390 real 0m19.958s 00:05:09.390 user 0m44.358s 00:05:09.390 sys 0m3.496s 00:05:09.390 19:43:37 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:09.390 19:43:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:09.390 19:43:37 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:09.390 19:43:37 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:09.390 19:43:37 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:09.390 19:43:37 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:09.390 19:43:37 event -- common/autotest_common.sh@10 -- # set +x 00:05:09.390 ************************************ 00:05:09.390 START TEST cpu_locks 00:05:09.390 ************************************ 00:05:09.390 19:43:37 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:09.390 * Looking for test storage... 
00:05:09.390 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:09.390 19:43:37 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:09.390 19:43:37 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:09.390 19:43:37 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:09.390 19:43:37 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:09.390 19:43:37 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:09.390 19:43:37 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:09.390 19:43:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.390 ************************************ 00:05:09.390 START TEST default_locks 00:05:09.390 ************************************ 00:05:09.390 19:43:37 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:09.390 19:43:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60725 00:05:09.390 19:43:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:09.390 19:43:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60725 00:05:09.390 19:43:37 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 60725 ']' 00:05:09.390 19:43:37 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.390 19:43:37 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:09.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.390 19:43:37 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:09.390 19:43:37 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:09.390 19:43:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.390 [2024-07-24 19:43:38.048865] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:05:09.390 [2024-07-24 19:43:38.049048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60725 ] 00:05:09.648 [2024-07-24 19:43:38.198134] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.906 [2024-07-24 19:43:38.381948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.906 [2024-07-24 19:43:38.468439] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:10.841 19:43:39 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:10.841 19:43:39 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:10.841 19:43:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60725 00:05:10.841 19:43:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:10.841 19:43:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60725 00:05:11.099 19:43:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60725 00:05:11.099 19:43:39 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 60725 ']' 00:05:11.099 19:43:39 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 60725 00:05:11.099 19:43:39 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:11.099 19:43:39 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:05:11.099 19:43:39 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60725 00:05:11.099 19:43:39 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:11.099 killing process with pid 60725 00:05:11.099 19:43:39 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:11.099 19:43:39 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60725' 00:05:11.099 19:43:39 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 60725 00:05:11.099 19:43:39 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 60725 00:05:12.033 19:43:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60725 00:05:12.033 19:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:12.034 19:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60725 00:05:12.034 19:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:12.034 19:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:12.034 19:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:12.034 19:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:12.034 19:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 60725 00:05:12.034 19:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 60725 ']' 00:05:12.034 19:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.034 19:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:12.034 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.034 19:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.034 19:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:12.034 19:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.034 ERROR: process (pid: 60725) is no longer running 00:05:12.034 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60725) - No such process 00:05:12.034 19:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:12.034 19:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:12.034 19:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:12.034 19:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:12.034 19:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:12.034 19:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:12.034 19:43:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:12.034 19:43:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:12.034 19:43:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:12.034 19:43:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:12.034 00:05:12.034 real 0m2.447s 00:05:12.034 user 0m2.564s 00:05:12.034 sys 0m0.832s 00:05:12.034 19:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.034 19:43:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.034 ************************************ 00:05:12.034 END TEST 
default_locks 00:05:12.034 ************************************ 00:05:12.034 19:43:40 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:12.034 19:43:40 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.034 19:43:40 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.034 19:43:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.034 ************************************ 00:05:12.034 START TEST default_locks_via_rpc 00:05:12.034 ************************************ 00:05:12.034 19:43:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:12.034 19:43:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60782 00:05:12.034 19:43:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:12.034 19:43:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60782 00:05:12.034 19:43:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60782 ']' 00:05:12.034 19:43:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.034 19:43:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:12.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.034 19:43:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:12.034 19:43:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:12.034 19:43:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.034 [2024-07-24 19:43:40.523146] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:05:12.034 [2024-07-24 19:43:40.523260] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60782 ] 00:05:12.034 [2024-07-24 19:43:40.663805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.291 [2024-07-24 19:43:40.843020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.291 [2024-07-24 19:43:40.934259] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:12.892 19:43:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:12.892 19:43:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:12.892 19:43:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:12.892 19:43:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.892 19:43:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.150 19:43:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.150 19:43:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:13.150 19:43:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:13.150 19:43:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:13.150 19:43:41 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:13.150 19:43:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:13.150 19:43:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.150 19:43:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.150 19:43:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.150 19:43:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60782 00:05:13.150 19:43:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60782 00:05:13.150 19:43:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:13.408 19:43:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60782 00:05:13.408 19:43:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 60782 ']' 00:05:13.408 19:43:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 60782 00:05:13.666 19:43:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:13.666 19:43:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:13.666 19:43:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60782 00:05:13.666 killing process with pid 60782 00:05:13.666 19:43:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:13.666 19:43:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:13.666 19:43:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60782' 00:05:13.666 19:43:42 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 60782 00:05:13.666 19:43:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 60782 00:05:13.925 00:05:13.925 real 0m2.001s 00:05:13.925 user 0m2.023s 00:05:13.925 sys 0m0.791s 00:05:13.925 19:43:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.925 ************************************ 00:05:13.925 END TEST default_locks_via_rpc 00:05:13.925 ************************************ 00:05:13.925 19:43:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.925 19:43:42 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:13.925 19:43:42 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.925 19:43:42 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.925 19:43:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.925 ************************************ 00:05:13.925 START TEST non_locking_app_on_locked_coremask 00:05:13.925 ************************************ 00:05:13.925 19:43:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:13.925 19:43:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60833 00:05:13.925 19:43:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60833 /var/tmp/spdk.sock 00:05:13.925 19:43:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60833 ']' 00:05:13.925 19:43:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.925 19:43:42 event.cpu_locks.non_locking_app_on_locked_coremask -- 
event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:13.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.925 19:43:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:13.925 19:43:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.925 19:43:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:13.925 19:43:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:13.925 [2024-07-24 19:43:42.566811] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:05:13.925 [2024-07-24 19:43:42.566903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60833 ] 00:05:14.185 [2024-07-24 19:43:42.700551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.185 [2024-07-24 19:43:42.846204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.444 [2024-07-24 19:43:42.896631] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:15.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:15.061 19:43:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:15.061 19:43:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:15.061 19:43:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60849 00:05:15.061 19:43:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60849 /var/tmp/spdk2.sock 00:05:15.061 19:43:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60849 ']' 00:05:15.061 19:43:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:15.061 19:43:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:15.061 19:43:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:15.061 19:43:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:15.061 19:43:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:15.061 19:43:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.061 [2024-07-24 19:43:43.625534] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:05:15.061 [2024-07-24 19:43:43.625776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60849 ] 00:05:15.319 [2024-07-24 19:43:43.782013] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:15.319 [2024-07-24 19:43:43.782094] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.577 [2024-07-24 19:43:44.168862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.834 [2024-07-24 19:43:44.342280] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:16.399 19:43:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:16.399 19:43:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:16.399 19:43:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60833 00:05:16.399 19:43:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60833 00:05:16.399 19:43:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:17.334 19:43:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60833 00:05:17.334 19:43:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60833 ']' 00:05:17.334 19:43:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60833 00:05:17.334 19:43:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:17.334 19:43:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:05:17.334 19:43:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60833 00:05:17.334 killing process with pid 60833 00:05:17.334 19:43:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:17.334 19:43:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:17.334 19:43:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60833' 00:05:17.334 19:43:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60833 00:05:17.334 19:43:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60833 00:05:18.708 19:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60849 00:05:18.708 19:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60849 ']' 00:05:18.708 19:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60849 00:05:18.708 19:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:18.708 19:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:18.708 19:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60849 00:05:18.708 19:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:18.708 killing process with pid 60849 00:05:18.708 19:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:18.708 19:43:47 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60849' 00:05:18.708 19:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60849 00:05:18.708 19:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60849 00:05:19.643 00:05:19.643 real 0m5.432s 00:05:19.643 user 0m5.783s 00:05:19.643 sys 0m1.344s 00:05:19.643 19:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.643 ************************************ 00:05:19.643 END TEST non_locking_app_on_locked_coremask 00:05:19.643 ************************************ 00:05:19.643 19:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:19.643 19:43:47 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:19.643 19:43:47 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.643 19:43:47 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.643 19:43:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:19.643 ************************************ 00:05:19.643 START TEST locking_app_on_unlocked_coremask 00:05:19.643 ************************************ 00:05:19.643 19:43:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:19.643 19:43:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60927 00:05:19.643 19:43:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60927 /var/tmp/spdk.sock 00:05:19.643 19:43:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 
--disable-cpumask-locks 00:05:19.644 19:43:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60927 ']' 00:05:19.644 19:43:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.644 19:43:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:19.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.644 19:43:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.644 19:43:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:19.644 19:43:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:19.644 [2024-07-24 19:43:48.078556] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:05:19.644 [2024-07-24 19:43:48.078704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60927 ] 00:05:19.644 [2024-07-24 19:43:48.223976] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:19.644 [2024-07-24 19:43:48.224081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.901 [2024-07-24 19:43:48.415573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.901 [2024-07-24 19:43:48.513145] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:20.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:20.836 19:43:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:20.836 19:43:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:20.836 19:43:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60949 00:05:20.836 19:43:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:20.836 19:43:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60949 /var/tmp/spdk2.sock 00:05:20.836 19:43:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60949 ']' 00:05:20.836 19:43:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:20.836 19:43:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:20.836 19:43:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:20.836 19:43:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:20.836 19:43:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.836 [2024-07-24 19:43:49.213593] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:05:20.836 [2024-07-24 19:43:49.215209] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60949 ] 00:05:20.836 [2024-07-24 19:43:49.371281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.111 [2024-07-24 19:43:49.653341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.368 [2024-07-24 19:43:49.788426] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:21.933 19:43:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:21.933 19:43:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:21.933 19:43:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60949 00:05:21.933 19:43:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60949 00:05:21.933 19:43:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:22.867 19:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60927 00:05:22.867 19:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60927 ']' 00:05:22.867 19:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 60927 00:05:22.867 19:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:22.867 19:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:22.867 19:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o 
comm= 60927 00:05:22.867 killing process with pid 60927 00:05:22.867 19:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:22.867 19:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:22.867 19:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60927' 00:05:22.867 19:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 60927 00:05:22.867 19:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 60927 00:05:24.252 19:43:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60949 00:05:24.252 19:43:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60949 ']' 00:05:24.252 19:43:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 60949 00:05:24.252 19:43:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:24.252 19:43:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:24.252 19:43:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60949 00:05:24.252 killing process with pid 60949 00:05:24.252 19:43:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:24.252 19:43:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:24.252 19:43:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60949' 00:05:24.252 19:43:52 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@969 -- # kill 60949 00:05:24.252 19:43:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 60949 00:05:24.817 00:05:24.817 real 0m5.463s 00:05:24.817 user 0m5.764s 00:05:24.817 sys 0m1.520s 00:05:24.817 19:43:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:24.817 19:43:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.817 ************************************ 00:05:24.817 END TEST locking_app_on_unlocked_coremask 00:05:24.817 ************************************ 00:05:25.075 19:43:53 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:25.075 19:43:53 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:25.075 19:43:53 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.075 19:43:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.075 ************************************ 00:05:25.075 START TEST locking_app_on_locked_coremask 00:05:25.075 ************************************ 00:05:25.075 19:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:25.075 19:43:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=61027 00:05:25.076 19:43:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:25.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:25.076 19:43:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 61027 /var/tmp/spdk.sock 00:05:25.076 19:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 61027 ']' 00:05:25.076 19:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.076 19:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:25.076 19:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.076 19:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:25.076 19:43:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.076 [2024-07-24 19:43:53.594735] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:05:25.076 [2024-07-24 19:43:53.595151] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61027 ] 00:05:25.076 [2024-07-24 19:43:53.738196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.334 [2024-07-24 19:43:53.906720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.334 [2024-07-24 19:43:53.993592] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:26.269 19:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:26.269 19:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:26.269 19:43:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=61043 00:05:26.269 19:43:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:26.269 19:43:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 61043 /var/tmp/spdk2.sock 00:05:26.269 19:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:26.269 19:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 61043 /var/tmp/spdk2.sock 00:05:26.269 19:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:26.269 19:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.269 19:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t 
waitforlisten 00:05:26.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:26.269 19:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.269 19:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 61043 /var/tmp/spdk2.sock 00:05:26.269 19:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 61043 ']' 00:05:26.269 19:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:26.269 19:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:26.269 19:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:26.269 19:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:26.269 19:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.269 [2024-07-24 19:43:54.782392] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:05:26.269 [2024-07-24 19:43:54.782527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61043 ] 00:05:26.527 [2024-07-24 19:43:54.934631] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 61027 has claimed it. 00:05:26.527 [2024-07-24 19:43:54.934748] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:05:27.093 ERROR: process (pid: 61043) is no longer running 00:05:27.093 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (61043) - No such process 00:05:27.093 19:43:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:27.093 19:43:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:27.093 19:43:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:27.093 19:43:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:27.093 19:43:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:27.093 19:43:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:27.093 19:43:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 61027 00:05:27.093 19:43:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61027 00:05:27.093 19:43:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:27.687 19:43:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 61027 00:05:27.687 19:43:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 61027 ']' 00:05:27.687 19:43:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 61027 00:05:27.687 19:43:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:27.687 19:43:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:27.687 19:43:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61027 00:05:27.687 
killing process with pid 61027 00:05:27.687 19:43:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:27.687 19:43:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:27.687 19:43:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61027' 00:05:27.687 19:43:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 61027 00:05:27.687 19:43:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 61027 00:05:28.253 ************************************ 00:05:28.253 END TEST locking_app_on_locked_coremask 00:05:28.253 ************************************ 00:05:28.253 00:05:28.253 real 0m3.195s 00:05:28.253 user 0m3.647s 00:05:28.253 sys 0m0.872s 00:05:28.253 19:43:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.253 19:43:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.253 19:43:56 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:28.253 19:43:56 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:28.253 19:43:56 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.253 19:43:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.253 ************************************ 00:05:28.253 START TEST locking_overlapped_coremask 00:05:28.253 ************************************ 00:05:28.253 19:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:05:28.253 19:43:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61094 00:05:28.253 19:43:56 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:28.253 19:43:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61094 /var/tmp/spdk.sock 00:05:28.253 19:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 61094 ']' 00:05:28.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.253 19:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.253 19:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:28.253 19:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.253 19:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:28.253 19:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.253 [2024-07-24 19:43:56.824044] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:05:28.253 [2024-07-24 19:43:56.824799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61094 ] 00:05:28.512 [2024-07-24 19:43:56.963518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:28.512 [2024-07-24 19:43:57.163058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.512 [2024-07-24 19:43:57.163161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.512 [2024-07-24 19:43:57.163178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.771 [2024-07-24 19:43:57.255970] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:29.337 19:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:29.337 19:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:29.337 19:43:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61112 00:05:29.337 19:43:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:29.337 19:43:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61112 /var/tmp/spdk2.sock 00:05:29.337 19:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:29.337 19:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 61112 /var/tmp/spdk2.sock 00:05:29.337 19:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:29.337 19:43:57 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:29.337 19:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:29.338 19:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:29.338 19:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 61112 /var/tmp/spdk2.sock 00:05:29.338 19:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 61112 ']' 00:05:29.338 19:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:29.338 19:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:29.338 19:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:29.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:29.338 19:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:29.338 19:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.338 [2024-07-24 19:43:57.922984] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:05:29.338 [2024-07-24 19:43:57.923298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61112 ] 00:05:29.595 [2024-07-24 19:43:58.066671] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61094 has claimed it. 
00:05:29.595 [2024-07-24 19:43:58.066755] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:30.160 ERROR: process (pid: 61112) is no longer running 00:05:30.160 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (61112) - No such process 00:05:30.160 19:43:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:30.160 19:43:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:30.160 19:43:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:30.160 19:43:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:30.160 19:43:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:30.160 19:43:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:30.160 19:43:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:30.160 19:43:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:30.160 19:43:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:30.160 19:43:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:30.160 19:43:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61094 00:05:30.160 19:43:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 61094 ']' 00:05:30.160 
19:43:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 61094 00:05:30.160 19:43:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:30.160 19:43:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:30.160 19:43:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61094 00:05:30.160 killing process with pid 61094 00:05:30.160 19:43:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:30.160 19:43:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:30.160 19:43:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61094' 00:05:30.160 19:43:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 61094 00:05:30.160 19:43:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 61094 00:05:30.727 ************************************ 00:05:30.727 END TEST locking_overlapped_coremask 00:05:30.727 ************************************ 00:05:30.727 00:05:30.727 real 0m2.353s 00:05:30.727 user 0m6.187s 00:05:30.727 sys 0m0.610s 00:05:30.727 19:43:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:30.727 19:43:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.727 19:43:59 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:30.727 19:43:59 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:30.727 19:43:59 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.727 19:43:59 event.cpu_locks -- 
common/autotest_common.sh@10 -- # set +x 00:05:30.727 ************************************ 00:05:30.727 START TEST locking_overlapped_coremask_via_rpc 00:05:30.727 ************************************ 00:05:30.727 19:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:05:30.727 19:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61152 00:05:30.727 19:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61152 /var/tmp/spdk.sock 00:05:30.727 19:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:30.727 19:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61152 ']' 00:05:30.727 19:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.727 19:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:30.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.727 19:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.727 19:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:30.727 19:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.727 [2024-07-24 19:43:59.229616] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:05:30.727 [2024-07-24 19:43:59.229698] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61152 ] 00:05:30.727 [2024-07-24 19:43:59.365184] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:30.727 [2024-07-24 19:43:59.365252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:30.986 [2024-07-24 19:43:59.533553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.986 [2024-07-24 19:43:59.533666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:30.986 [2024-07-24 19:43:59.533673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.986 [2024-07-24 19:43:59.615529] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:31.933 19:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:31.933 19:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:31.933 19:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:31.933 19:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61174 00:05:31.933 19:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61174 /var/tmp/spdk2.sock 00:05:31.933 19:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61174 ']' 00:05:31.933 19:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:31.934 19:44:00 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:31.934 19:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:31.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:31.934 19:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:31.934 19:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.934 [2024-07-24 19:44:00.301079] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:05:31.934 [2024-07-24 19:44:00.302039] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61174 ] 00:05:31.934 [2024-07-24 19:44:00.453432] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:31.934 [2024-07-24 19:44:00.453502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:32.192 [2024-07-24 19:44:00.782681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:32.192 [2024-07-24 19:44:00.782765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:32.192 [2024-07-24 19:44:00.782768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:32.450 [2024-07-24 19:44:00.949507] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:33.015 19:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:33.015 19:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:33.015 19:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:33.015 19:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.015 19:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.015 19:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.015 19:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:33.015 19:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:33.015 19:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:33.015 19:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:33.015 19:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc 
-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:33.015 19:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:33.015 19:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:33.015 19:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:33.015 19:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.015 19:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.015 [2024-07-24 19:44:01.517140] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61152 has claimed it. 00:05:33.015 request: 00:05:33.015 { 00:05:33.015 "method": "framework_enable_cpumask_locks", 00:05:33.015 "req_id": 1 00:05:33.015 } 00:05:33.015 Got JSON-RPC error response 00:05:33.015 response: 00:05:33.015 { 00:05:33.015 "code": -32603, 00:05:33.015 "message": "Failed to claim CPU core: 2" 00:05:33.015 } 00:05:33.015 19:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:33.015 19:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:33.015 19:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:33.015 19:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:33.015 19:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:33.015 19:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61152 /var/tmp/spdk.sock 00:05:33.015 19:44:01 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61152 ']' 00:05:33.015 19:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.015 19:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:33.015 19:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.015 19:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:33.015 19:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.273 19:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:33.273 19:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:33.273 19:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61174 /var/tmp/spdk2.sock 00:05:33.273 19:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61174 ']' 00:05:33.273 19:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:33.273 19:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:33.273 19:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:33.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:33.273 19:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:33.273 19:44:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.553 ************************************ 00:05:33.553 END TEST locking_overlapped_coremask_via_rpc 00:05:33.553 ************************************ 00:05:33.553 19:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:33.553 19:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:33.553 19:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:33.553 19:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:33.554 19:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:33.554 19:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:33.554 00:05:33.554 real 0m2.851s 00:05:33.554 user 0m1.319s 00:05:33.554 sys 0m0.238s 00:05:33.554 19:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.554 19:44:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.554 19:44:02 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:33.554 19:44:02 event.cpu_locks -- 
event/cpu_locks.sh@15 -- # [[ -z 61152 ]] 00:05:33.554 19:44:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61152 00:05:33.554 19:44:02 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61152 ']' 00:05:33.554 19:44:02 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61152 00:05:33.554 19:44:02 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:33.554 19:44:02 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:33.554 19:44:02 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61152 00:05:33.554 killing process with pid 61152 00:05:33.554 19:44:02 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:33.554 19:44:02 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:33.554 19:44:02 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61152' 00:05:33.554 19:44:02 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 61152 00:05:33.554 19:44:02 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 61152 00:05:33.832 19:44:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61174 ]] 00:05:33.832 19:44:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61174 00:05:33.832 19:44:02 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61174 ']' 00:05:33.832 19:44:02 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61174 00:05:33.832 19:44:02 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:33.832 19:44:02 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:33.832 19:44:02 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61174 00:05:34.090 killing process with pid 61174 00:05:34.090 19:44:02 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:34.090 19:44:02 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' 
reactor_2 = sudo ']' 00:05:34.090 19:44:02 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61174' 00:05:34.090 19:44:02 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 61174 00:05:34.090 19:44:02 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 61174 00:05:34.656 19:44:03 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:34.656 19:44:03 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:34.656 19:44:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61152 ]] 00:05:34.656 19:44:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61152 00:05:34.656 19:44:03 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61152 ']' 00:05:34.656 Process with pid 61152 is not found 00:05:34.656 19:44:03 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61152 00:05:34.656 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (61152) - No such process 00:05:34.656 19:44:03 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 61152 is not found' 00:05:34.656 19:44:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61174 ]] 00:05:34.656 19:44:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61174 00:05:34.656 19:44:03 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61174 ']' 00:05:34.656 Process with pid 61174 is not found 00:05:34.656 19:44:03 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61174 00:05:34.656 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (61174) - No such process 00:05:34.656 19:44:03 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 61174 is not found' 00:05:34.656 19:44:03 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:34.656 ************************************ 00:05:34.656 END TEST cpu_locks 00:05:34.656 ************************************ 00:05:34.656 00:05:34.656 real 0m25.243s 00:05:34.656 user 
0m40.499s 00:05:34.656 sys 0m7.351s 00:05:34.656 19:44:03 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:34.656 19:44:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:34.656 ************************************ 00:05:34.656 END TEST event 00:05:34.656 ************************************ 00:05:34.656 00:05:34.656 real 0m54.708s 00:05:34.656 user 1m40.379s 00:05:34.656 sys 0m11.876s 00:05:34.656 19:44:03 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:34.656 19:44:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:34.656 19:44:03 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:34.656 19:44:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:34.656 19:44:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.656 19:44:03 -- common/autotest_common.sh@10 -- # set +x 00:05:34.656 ************************************ 00:05:34.656 START TEST thread 00:05:34.656 ************************************ 00:05:34.656 19:44:03 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:34.656 * Looking for test storage... 
00:05:34.656 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:34.656 19:44:03 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:34.656 19:44:03 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:34.656 19:44:03 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.656 19:44:03 thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.656 ************************************ 00:05:34.656 START TEST thread_poller_perf 00:05:34.656 ************************************ 00:05:34.656 19:44:03 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:34.914 [2024-07-24 19:44:03.329111] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:05:34.914 [2024-07-24 19:44:03.329238] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61298 ] 00:05:34.914 [2024-07-24 19:44:03.467559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.171 [2024-07-24 19:44:03.639767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.171 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:36.128 ====================================== 00:05:36.128 busy:2109834844 (cyc) 00:05:36.128 total_run_count: 342000 00:05:36.128 tsc_hz: 2100000000 (cyc) 00:05:36.128 ====================================== 00:05:36.128 poller_cost: 6169 (cyc), 2937 (nsec) 00:05:36.128 00:05:36.128 real 0m1.467s 00:05:36.128 user 0m1.268s 00:05:36.128 sys 0m0.089s 00:05:36.128 19:44:04 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.128 19:44:04 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:36.128 ************************************ 00:05:36.128 END TEST thread_poller_perf 00:05:36.128 ************************************ 00:05:36.387 19:44:04 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:36.387 19:44:04 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:36.387 19:44:04 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.387 19:44:04 thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.387 ************************************ 00:05:36.387 START TEST thread_poller_perf 00:05:36.387 ************************************ 00:05:36.387 19:44:04 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:36.387 [2024-07-24 19:44:04.852578] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:05:36.387 [2024-07-24 19:44:04.853518] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61339 ]
00:05:36.387 [2024-07-24 19:44:04.989857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:36.645 Running 1000 pollers for 1 seconds with 0 microseconds period.
00:05:36.645 [2024-07-24 19:44:05.155337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:37.581 ======================================
00:05:37.581 busy:2102251222 (cyc)
00:05:37.581 total_run_count: 4177000
00:05:37.581 tsc_hz: 2100000000 (cyc)
00:05:37.581 ======================================
00:05:37.581 poller_cost: 503 (cyc), 239 (nsec)
00:05:37.581
00:05:37.581 real 0m1.405s
00:05:37.581 user 0m1.212s
00:05:37.581 sys 0m0.082s
00:05:37.581 19:44:06 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:37.581 19:44:06 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:05:37.581 ************************************
00:05:37.581 END TEST thread_poller_perf
00:05:37.581 ************************************
00:05:37.840 19:44:06 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:05:37.840 ************************************
00:05:37.840 END TEST thread
00:05:37.840 ************************************
00:05:37.840
00:05:37.840 real 0m3.077s
00:05:37.840 user 0m2.540s
00:05:37.840 sys 0m0.314s
00:05:37.840 19:44:06 thread -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:37.840 19:44:06 thread -- common/autotest_common.sh@10 -- # set +x
00:05:37.840 19:44:06 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]]
00:05:37.840 19:44:06 -- spdk/autotest.sh@189 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:05:37.840 19:44:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:37.840 19:44:06 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:37.840 19:44:06 -- common/autotest_common.sh@10 -- # set +x
00:05:37.840 ************************************
00:05:37.840 START TEST app_cmdline
00:05:37.840 ************************************
00:05:37.840 19:44:06 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:05:37.840 * Looking for test storage...
00:05:37.840 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:05:37.840 19:44:06 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:05:37.840 19:44:06 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61414
00:05:37.840 19:44:06 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:05:37.840 19:44:06 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61414
00:05:37.840 19:44:06 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 61414 ']'
00:05:37.840 19:44:06 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:37.840 19:44:06 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:37.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 19:44:06 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:37.840 19:44:06 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:37.840 19:44:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:05:38.099 [2024-07-24 19:44:06.517884] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization...
00:05:38.099 [2024-07-24 19:44:06.518069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61414 ]
00:05:38.099 [2024-07-24 19:44:06.668508] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:38.357 [2024-07-24 19:44:06.782634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:38.357 [2024-07-24 19:44:06.828996] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring
00:05:39.287 19:44:07 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:39.287 19:44:07 app_cmdline -- common/autotest_common.sh@864 -- # return 0
00:05:39.287 19:44:07 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
00:05:39.544 {
00:05:39.544 "version": "SPDK v24.09-pre git sha1 0c322284f",
00:05:39.544 "fields": {
00:05:39.544 "major": 24,
00:05:39.544 "minor": 9,
00:05:39.544 "patch": 0,
00:05:39.544 "suffix": "-pre",
00:05:39.544 "commit": "0c322284f"
00:05:39.544 }
00:05:39.544 }
00:05:39.544 19:44:07 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:05:39.544 19:44:07 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:05:39.544 19:44:07 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:05:39.544 19:44:07 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:05:39.544 19:44:07 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:05:39.544 19:44:07 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:05:39.544 19:44:07 app_cmdline -- app/cmdline.sh@26 -- # sort
00:05:39.544 19:44:07 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:39.544 19:44:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:05:39.544 19:44:08 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:39.544 19:44:08 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:05:39.544 19:44:08 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:05:39.544 19:44:08 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:05:39.544 19:44:08 app_cmdline -- common/autotest_common.sh@650 -- # local es=0
00:05:39.544 19:44:08 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:05:39.544 19:44:08 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:05:39.544 19:44:08 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:39.544 19:44:08 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:05:39.545 19:44:08 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:39.545 19:44:08 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:05:39.545 19:44:08 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:39.545 19:44:08 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:05:39.545 19:44:08 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:05:39.545 19:44:08 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:05:39.804 request:
00:05:39.804 {
00:05:39.804 "method": "env_dpdk_get_mem_stats",
00:05:39.804 "req_id": 1
00:05:39.804 }
00:05:39.804 Got JSON-RPC error response
00:05:39.804 response:
00:05:39.804 {
00:05:39.804 "code": -32601,
00:05:39.804 "message": "Method not found"
00:05:39.804 }
00:05:39.804 19:44:08 app_cmdline -- common/autotest_common.sh@653 -- # es=1
00:05:39.804 19:44:08 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:39.804 19:44:08 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:39.804 19:44:08 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:39.804 19:44:08 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61414
00:05:39.804 19:44:08 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 61414 ']'
00:05:39.804 19:44:08 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 61414
00:05:39.804 19:44:08 app_cmdline -- common/autotest_common.sh@955 -- # uname
00:05:39.804 19:44:08 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:39.804 19:44:08 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61414
00:05:39.804 killing process with pid 61414 19:44:08 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:39.804 19:44:08 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:39.804 19:44:08 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61414'
00:05:39.804 19:44:08 app_cmdline -- common/autotest_common.sh@969 -- # kill 61414
00:05:39.804 19:44:08 app_cmdline -- common/autotest_common.sh@974 -- # wait 61414
00:05:40.084 ************************************
00:05:40.084 END TEST app_cmdline
00:05:40.084 ************************************
00:05:40.084
00:05:40.084 real 0m2.326s
00:05:40.084 user 0m3.050s
00:05:40.084 sys 0m0.517s
00:05:40.084 19:44:08 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:40.084 19:44:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:05:40.084 19:44:08 -- spdk/autotest.sh@190 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:05:40.084 19:44:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:40.084 19:44:08 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:40.084 19:44:08 -- common/autotest_common.sh@10 -- # set +x
00:05:40.084 ************************************
00:05:40.084 START TEST version
00:05:40.084 ************************************
00:05:40.084 19:44:08 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:05:40.343 * Looking for test storage...
00:05:40.343 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:05:40.343 19:44:08 version -- app/version.sh@17 -- # get_header_version major
00:05:40.343 19:44:08 version -- app/version.sh@14 -- # cut -f2
00:05:40.343 19:44:08 version -- app/version.sh@14 -- # tr -d '"'
00:05:40.343 19:44:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:05:40.343 19:44:08 version -- app/version.sh@17 -- # major=24
00:05:40.343 19:44:08 version -- app/version.sh@18 -- # get_header_version minor
00:05:40.343 19:44:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:05:40.343 19:44:08 version -- app/version.sh@14 -- # cut -f2
00:05:40.343 19:44:08 version -- app/version.sh@14 -- # tr -d '"'
00:05:40.343 19:44:08 version -- app/version.sh@18 -- # minor=9
00:05:40.343 19:44:08 version -- app/version.sh@19 -- # get_header_version patch
00:05:40.343 19:44:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:05:40.343 19:44:08 version -- app/version.sh@14 -- # cut -f2
00:05:40.343 19:44:08 version -- app/version.sh@14 -- # tr -d '"'
00:05:40.343 19:44:08 version -- app/version.sh@19 -- # patch=0
00:05:40.343 19:44:08 version -- app/version.sh@20 -- # get_header_version suffix
00:05:40.343 19:44:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:05:40.343 19:44:08 version -- app/version.sh@14 -- # cut -f2
00:05:40.343 19:44:08 version -- app/version.sh@14 -- # tr -d '"'
00:05:40.343 19:44:08 version -- app/version.sh@20 -- # suffix=-pre
00:05:40.343 19:44:08 version -- app/version.sh@22 -- # version=24.9
00:05:40.343 19:44:08 version -- app/version.sh@25 -- # (( patch != 0 ))
00:05:40.343 19:44:08 version -- app/version.sh@28 -- # version=24.9rc0
00:05:40.343 19:44:08 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:05:40.343 19:44:08 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:05:40.343 19:44:08 version -- app/version.sh@30 -- # py_version=24.9rc0
00:05:40.343 19:44:08 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]]
00:05:40.343
00:05:40.343 real 0m0.175s
00:05:40.343 user 0m0.099s
00:05:40.343 sys 0m0.107s
00:05:40.343 ************************************
00:05:40.343 END TEST version
00:05:40.343 ************************************
00:05:40.343 19:44:08 version -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:40.343 19:44:08 version -- common/autotest_common.sh@10 -- # set +x
00:05:40.343 19:44:08 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']'
00:05:40.343 19:44:08 -- spdk/autotest.sh@202 -- # uname -s
00:05:40.343 19:44:08 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]]
00:05:40.343 19:44:08 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]]
00:05:40.343 19:44:08 -- spdk/autotest.sh@203 -- # [[ 1 -eq 1 ]]
00:05:40.343 19:44:08 -- spdk/autotest.sh@209 -- # [[ 0 -eq 0 ]]
00:05:40.343 19:44:08 -- spdk/autotest.sh@210 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh
00:05:40.343 19:44:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:40.343 19:44:08 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:40.343 19:44:08 -- common/autotest_common.sh@10 -- # set +x
00:05:40.343 ************************************
00:05:40.343 START TEST spdk_dd
00:05:40.343 ************************************
00:05:40.343 19:44:08 spdk_dd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh
00:05:40.601 * Looking for test storage...
00:05:40.601 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd
00:05:40.601 19:44:09 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:05:40.601 19:44:09 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:40.601 19:44:09 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:40.601 19:44:09 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:40.601 19:44:09 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:40.601 19:44:09 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:40.601 19:44:09 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:40.601 19:44:09 spdk_dd -- paths/export.sh@5 -- # export PATH
00:05:40.601 19:44:09 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:40.601 19:44:09 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:40.859 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:40.859 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:40.859 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:40.859 19:44:09 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace))
00:05:40.859 19:44:09 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace
00:05:40.859 19:44:09 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs
00:05:40.859 19:44:09 spdk_dd -- scripts/common.sh@310 -- # local nvmes
00:05:40.859 19:44:09 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]]
00:05:40.859 19:44:09 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02))
00:05:40.860 19:44:09 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02
00:05:40.860 19:44:09 spdk_dd -- scripts/common.sh@295 -- # local bdf=
00:05:40.860 19:44:09 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02
00:05:40.860 19:44:09 spdk_dd -- scripts/common.sh@230 -- # local class
00:05:40.860 19:44:09 spdk_dd -- scripts/common.sh@231 -- # local subclass
00:05:40.860 19:44:09 spdk_dd -- scripts/common.sh@232 -- # local progif
00:05:40.860 19:44:09 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1
00:05:40.860 19:44:09 spdk_dd -- scripts/common.sh@233 -- # class=01
00:05:40.860 19:44:09 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8
00:05:40.860 19:44:09 spdk_dd -- scripts/common.sh@234 -- # subclass=08
00:05:40.860 19:44:09 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2
00:05:40.860 19:44:09 spdk_dd -- scripts/common.sh@235 -- # progif=02
00:05:40.860 19:44:09 spdk_dd -- scripts/common.sh@237 -- # hash lspci
00:05:40.860 19:44:09 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']'
00:05:40.860 19:44:09 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D
00:05:40.860 19:44:09 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02
00:05:40.860 19:44:09 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}'
00:05:40.860 19:44:09 spdk_dd -- scripts/common.sh@242 -- # tr -d '"'
00:05:40.860 19:44:09 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@")
00:05:40.860 19:44:09 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0
00:05:40.860 19:44:09 spdk_dd -- scripts/common.sh@15 -- # local i
00:05:40.860 19:44:09 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]]
00:05:40.860 19:44:09 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]]
00:05:40.860 19:44:09 spdk_dd -- scripts/common.sh@24 -- # return 0
00:05:40.860 19:44:09 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0
00:05:40.860 19:44:09 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@")
00:05:40.860 19:44:09 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0
00:05:40.860 19:44:09 spdk_dd -- scripts/common.sh@15 -- # local i
00:05:40.860 19:44:09 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]]
00:05:40.860 19:44:09 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]]
00:05:40.860 19:44:09 spdk_dd -- scripts/common.sh@24 -- # return 0
00:05:40.860 19:44:09 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0
00:05:40.860 19:44:09 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}"
00:05:40.860 19:44:09 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]]
00:05:40.860 19:44:09 spdk_dd -- scripts/common.sh@320 -- # uname -s
00:05:40.860 19:44:09 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]]
00:05:40.860 19:44:09 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf")
00:05:40.860 19:44:09 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}"
00:05:40.860 19:44:09 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]]
00:05:40.860 19:44:09 spdk_dd -- scripts/common.sh@320 -- # uname -s
00:05:41.119 19:44:09 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]]
00:05:41.119 19:44:09 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf")
00:05:41.119 19:44:09 spdk_dd -- scripts/common.sh@325 -- # (( 2 ))
00:05:41.119 19:44:09 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:05:41.119 19:44:09 spdk_dd -- dd/dd.sh@13 -- # check_liburing
00:05:41.119 19:44:09 spdk_dd -- dd/common.sh@139 -- # local lib
00:05:41.119 19:44:09 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0
00:05:41.119 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.119 19:44:09 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:05:41.119 19:44:09 spdk_dd -- dd/common.sh@137 -- # grep NEEDED
00:05:41.119 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]]
00:05:41.119 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.119 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]]
00:05:41.119 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.119 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]]
00:05:41.119 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.119 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]]
00:05:41.119 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.119 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]]
00:05:41.119 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.119 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]]
00:05:41.119 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.119 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]]
00:05:41.119 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.119 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]]
00:05:41.119 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.119 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]]
00:05:41.119 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.119 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]]
00:05:41.119 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.1 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.16.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]]
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.120 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]]
00:05:41.121 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.121 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]]
00:05:41.121 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.121 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]]
00:05:41.121 19:44:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:05:41.121 19:44:09 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]]
00:05:41.121 19:44:09 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n'
00:05:41.121 * spdk_dd linked to liburing 00:05:41.121 19:44:09 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:41.121 19:44:09 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:05:41.121 19:44:09 spdk_dd -- 
common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@44 -- # 
CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@68 -- 
# CONFIG_FC_PATH= 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:05:41.121 19:44:09 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:05:41.121 19:44:09 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:05:41.121 19:44:09 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:05:41.121 19:44:09 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:05:41.121 19:44:09 spdk_dd -- dd/common.sh@153 -- # return 0 00:05:41.121 19:44:09 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:05:41.121 19:44:09 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:41.121 19:44:09 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:41.121 19:44:09 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.121 19:44:09 spdk_dd -- 
common/autotest_common.sh@10 -- # set +x 00:05:41.121 ************************************ 00:05:41.121 START TEST spdk_dd_basic_rw 00:05:41.121 ************************************ 00:05:41.121 19:44:09 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:41.121 * Looking for test storage... 00:05:41.121 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:41.121 19:44:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:41.121 19:44:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:41.121 19:44:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:41.121 19:44:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:41.121 19:44:09 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.121 19:44:09 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.121 19:44:09 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.121 19:44:09 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:05:41.121 19:44:09 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.121 19:44:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:05:41.121 19:44:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # 
nvmes=("$@") 00:05:41.121 19:44:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:05:41.122 19:44:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:05:41.122 19:44:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:05:41.122 19:44:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:05:41.122 19:44:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:05:41.122 19:44:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:41.122 19:44:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:41.122 19:44:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:05:41.122 19:44:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:05:41.122 19:44:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:05:41.122 19:44:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:05:41.381 19:44:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 
1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not 
Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands 
Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes 
Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard 
PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:05:41.381 19:44:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:05:41.381 19:44:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change 
Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported 
Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell 
Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: 
Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:05:41.381 19:44:09 
spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:05:41.381 19:44:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:05:41.381 19:44:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:05:41.381 19:44:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:05:41.381 19:44:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:41.381 19:44:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:05:41.381 19:44:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:41.381 19:44:09 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:41.381 19:44:09 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:41.381 19:44:09 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.381 19:44:09 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:41.381 ************************************ 00:05:41.381 START TEST dd_bs_lt_native_bs 00:05:41.381 ************************************ 00:05:41.381 19:44:09 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1125 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:41.381 19:44:09 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # local es=0 00:05:41.381 19:44:09 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:41.381 19:44:09 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:41.381 19:44:09 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:41.381 19:44:09 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:41.381 19:44:09 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:41.381 19:44:09 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:41.381 19:44:09 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:41.381 19:44:09 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:41.381 19:44:09 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:41.381 19:44:09 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:41.381 { 00:05:41.381 "subsystems": [ 00:05:41.381 { 00:05:41.381 "subsystem": "bdev", 00:05:41.381 "config": [ 00:05:41.381 { 00:05:41.381 "params": { 00:05:41.381 "trtype": "pcie", 00:05:41.381 "traddr": "0000:00:10.0", 00:05:41.381 "name": "Nvme0" 00:05:41.381 }, 00:05:41.381 "method": "bdev_nvme_attach_controller" 00:05:41.381 }, 00:05:41.382 { 00:05:41.382 "method": "bdev_wait_for_examine" 00:05:41.382 } 00:05:41.382 ] 00:05:41.382 } 00:05:41.382 ] 00:05:41.382 } 00:05:41.382 [2024-07-24 19:44:09.996880] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:05:41.382 [2024-07-24 19:44:09.997015] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61741 ] 00:05:41.639 [2024-07-24 19:44:10.139015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.639 [2024-07-24 19:44:10.244021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.639 [2024-07-24 19:44:10.288186] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:41.896 [2024-07-24 19:44:10.389770] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:05:41.896 [2024-07-24 19:44:10.389869] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:42.155 [2024-07-24 19:44:10.580748] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:42.155 ************************************ 00:05:42.155 END TEST dd_bs_lt_native_bs 00:05:42.155 ************************************ 00:05:42.155 19:44:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # es=234 00:05:42.155 19:44:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:42.155 19:44:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # es=106 00:05:42.155 19:44:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # case "$es" in 00:05:42.155 19:44:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@670 -- # es=1 00:05:42.155 19:44:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:42.155 00:05:42.155 real 0m0.789s 00:05:42.155 user 0m0.556s 00:05:42.155 sys 0m0.169s 00:05:42.155 19:44:10 
spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.155 19:44:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:05:42.155 19:44:10 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:05:42.155 19:44:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:42.155 19:44:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.155 19:44:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:42.155 ************************************ 00:05:42.155 START TEST dd_rw 00:05:42.155 ************************************ 00:05:42.155 19:44:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1125 -- # basic_rw 4096 00:05:42.155 19:44:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:05:42.155 19:44:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:05:42.155 19:44:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:05:42.155 19:44:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:05:42.155 19:44:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:42.155 19:44:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:42.155 19:44:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:42.155 19:44:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:42.155 19:44:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:42.155 19:44:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:42.155 19:44:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:42.155 19:44:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:42.155 19:44:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:05:42.155 19:44:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:05:42.155 19:44:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:05:42.155 19:44:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:42.155 19:44:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:42.155 19:44:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:43.105 19:44:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:05:43.105 19:44:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:43.105 19:44:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:43.105 19:44:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:43.105 { 00:05:43.105 "subsystems": [ 00:05:43.105 { 00:05:43.105 "subsystem": "bdev", 00:05:43.105 "config": [ 00:05:43.105 { 00:05:43.105 "params": { 00:05:43.105 "trtype": "pcie", 00:05:43.105 "traddr": "0000:00:10.0", 00:05:43.105 "name": "Nvme0" 00:05:43.105 }, 00:05:43.105 "method": "bdev_nvme_attach_controller" 00:05:43.105 }, 00:05:43.105 { 00:05:43.105 "method": "bdev_wait_for_examine" 00:05:43.105 } 00:05:43.105 ] 00:05:43.105 } 00:05:43.105 ] 00:05:43.105 } 00:05:43.105 [2024-07-24 19:44:11.588352] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:05:43.105 [2024-07-24 19:44:11.588835] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61773 ] 00:05:43.105 [2024-07-24 19:44:11.734473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.363 [2024-07-24 19:44:11.920508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.363 [2024-07-24 19:44:12.005400] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:43.878  Copying: 60/60 [kB] (average 29 MBps) 00:05:43.878 00:05:43.878 19:44:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:05:43.878 19:44:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:43.878 19:44:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:43.878 19:44:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:43.878 { 00:05:43.878 "subsystems": [ 00:05:43.878 { 00:05:43.878 "subsystem": "bdev", 00:05:43.878 "config": [ 00:05:43.878 { 00:05:43.878 "params": { 00:05:43.878 "trtype": "pcie", 00:05:43.878 "traddr": "0000:00:10.0", 00:05:43.878 "name": "Nvme0" 00:05:43.878 }, 00:05:43.878 "method": "bdev_nvme_attach_controller" 00:05:43.878 }, 00:05:43.878 { 00:05:43.878 "method": "bdev_wait_for_examine" 00:05:43.878 } 00:05:43.878 ] 00:05:43.878 } 00:05:43.878 ] 00:05:43.878 } 00:05:43.878 [2024-07-24 19:44:12.516745] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:05:43.878 [2024-07-24 19:44:12.516854] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61791 ] 00:05:44.135 [2024-07-24 19:44:12.654203] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.393 [2024-07-24 19:44:12.816171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.393 [2024-07-24 19:44:12.899387] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:44.959  Copying: 60/60 [kB] (average 29 MBps) 00:05:44.959 00:05:44.959 19:44:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:44.959 19:44:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:44.959 19:44:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:44.959 19:44:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:44.959 19:44:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:05:44.959 19:44:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:44.959 19:44:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:44.959 19:44:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:44.959 19:44:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:44.959 19:44:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:44.959 19:44:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:44.959 { 00:05:44.959 "subsystems": [ 00:05:44.959 { 
00:05:44.959 "subsystem": "bdev", 00:05:44.959 "config": [ 00:05:44.959 { 00:05:44.959 "params": { 00:05:44.959 "trtype": "pcie", 00:05:44.959 "traddr": "0000:00:10.0", 00:05:44.959 "name": "Nvme0" 00:05:44.959 }, 00:05:44.959 "method": "bdev_nvme_attach_controller" 00:05:44.959 }, 00:05:44.959 { 00:05:44.959 "method": "bdev_wait_for_examine" 00:05:44.959 } 00:05:44.959 ] 00:05:44.959 } 00:05:44.959 ] 00:05:44.959 } 00:05:44.959 [2024-07-24 19:44:13.427393] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:05:44.959 [2024-07-24 19:44:13.427824] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61812 ] 00:05:44.959 [2024-07-24 19:44:13.571200] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.217 [2024-07-24 19:44:13.747222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.217 [2024-07-24 19:44:13.838092] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:45.733  Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:45.733 00:05:45.733 19:44:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:45.733 19:44:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:05:45.733 19:44:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:05:45.733 19:44:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:05:45.733 19:44:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:45.733 19:44:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:45.733 19:44:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:46.299 19:44:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:05:46.299 19:44:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:46.299 19:44:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:46.299 19:44:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:46.300 [2024-07-24 19:44:14.959555] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:05:46.300 [2024-07-24 19:44:14.959679] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61837 ] 00:05:46.605 { 00:05:46.605 "subsystems": [ 00:05:46.605 { 00:05:46.605 "subsystem": "bdev", 00:05:46.605 "config": [ 00:05:46.605 { 00:05:46.606 "params": { 00:05:46.606 "trtype": "pcie", 00:05:46.606 "traddr": "0000:00:10.0", 00:05:46.606 "name": "Nvme0" 00:05:46.606 }, 00:05:46.606 "method": "bdev_nvme_attach_controller" 00:05:46.606 }, 00:05:46.606 { 00:05:46.606 "method": "bdev_wait_for_examine" 00:05:46.606 } 00:05:46.606 ] 00:05:46.606 } 00:05:46.606 ] 00:05:46.606 } 00:05:46.606 [2024-07-24 19:44:15.095915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.863 [2024-07-24 19:44:15.295053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.863 [2024-07-24 19:44:15.387133] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:47.428  Copying: 60/60 [kB] (average 58 MBps) 00:05:47.428 00:05:47.428 19:44:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:05:47.428 19:44:15 
spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:47.428 19:44:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:47.428 19:44:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:47.428 { 00:05:47.428 "subsystems": [ 00:05:47.428 { 00:05:47.428 "subsystem": "bdev", 00:05:47.428 "config": [ 00:05:47.428 { 00:05:47.428 "params": { 00:05:47.428 "trtype": "pcie", 00:05:47.428 "traddr": "0000:00:10.0", 00:05:47.428 "name": "Nvme0" 00:05:47.428 }, 00:05:47.428 "method": "bdev_nvme_attach_controller" 00:05:47.428 }, 00:05:47.428 { 00:05:47.428 "method": "bdev_wait_for_examine" 00:05:47.428 } 00:05:47.428 ] 00:05:47.428 } 00:05:47.428 ] 00:05:47.428 } 00:05:47.428 [2024-07-24 19:44:15.929329] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:05:47.428 [2024-07-24 19:44:15.929432] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61856 ] 00:05:47.428 [2024-07-24 19:44:16.076087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.686 [2024-07-24 19:44:16.182838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.686 [2024-07-24 19:44:16.259844] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:48.202  Copying: 60/60 [kB] (average 58 MBps) 00:05:48.202 00:05:48.202 19:44:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:48.202 19:44:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:48.202 19:44:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:48.202 19:44:16 
spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:48.202 19:44:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:05:48.202 19:44:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:48.202 19:44:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:48.202 19:44:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:48.202 19:44:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:48.202 19:44:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:48.202 19:44:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:48.202 { 00:05:48.202 "subsystems": [ 00:05:48.202 { 00:05:48.202 "subsystem": "bdev", 00:05:48.202 "config": [ 00:05:48.202 { 00:05:48.202 "params": { 00:05:48.202 "trtype": "pcie", 00:05:48.202 "traddr": "0000:00:10.0", 00:05:48.202 "name": "Nvme0" 00:05:48.202 }, 00:05:48.202 "method": "bdev_nvme_attach_controller" 00:05:48.202 }, 00:05:48.202 { 00:05:48.202 "method": "bdev_wait_for_examine" 00:05:48.202 } 00:05:48.202 ] 00:05:48.202 } 00:05:48.202 ] 00:05:48.202 } 00:05:48.202 [2024-07-24 19:44:16.762647] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:05:48.202 [2024-07-24 19:44:16.762752] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61877 ] 00:05:48.460 [2024-07-24 19:44:16.909424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.460 [2024-07-24 19:44:17.030509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.460 [2024-07-24 19:44:17.080022] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:48.976  Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:48.976 00:05:48.976 19:44:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:48.976 19:44:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:48.976 19:44:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:05:48.976 19:44:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:05:48.976 19:44:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:05:48.976 19:44:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:05:48.976 19:44:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:48.976 19:44:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:49.541 19:44:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:05:49.541 19:44:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:49.541 19:44:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:49.541 19:44:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:49.541 { 
00:05:49.541 "subsystems": [ 00:05:49.541 { 00:05:49.541 "subsystem": "bdev", 00:05:49.541 "config": [ 00:05:49.541 { 00:05:49.541 "params": { 00:05:49.541 "trtype": "pcie", 00:05:49.541 "traddr": "0000:00:10.0", 00:05:49.541 "name": "Nvme0" 00:05:49.541 }, 00:05:49.541 "method": "bdev_nvme_attach_controller" 00:05:49.541 }, 00:05:49.541 { 00:05:49.541 "method": "bdev_wait_for_examine" 00:05:49.541 } 00:05:49.541 ] 00:05:49.541 } 00:05:49.541 ] 00:05:49.541 } 00:05:49.541 [2024-07-24 19:44:18.136596] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:05:49.541 [2024-07-24 19:44:18.136733] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61898 ] 00:05:49.799 [2024-07-24 19:44:18.280546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.799 [2024-07-24 19:44:18.447532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.057 [2024-07-24 19:44:18.532418] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:50.623  Copying: 56/56 [kB] (average 54 MBps) 00:05:50.623 00:05:50.623 19:44:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:05:50.623 19:44:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:50.623 19:44:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:50.623 19:44:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:50.623 { 00:05:50.623 "subsystems": [ 00:05:50.623 { 00:05:50.623 "subsystem": "bdev", 00:05:50.623 "config": [ 00:05:50.623 { 00:05:50.623 "params": { 00:05:50.623 "trtype": "pcie", 
00:05:50.623 "traddr": "0000:00:10.0", 00:05:50.623 "name": "Nvme0" 00:05:50.623 }, 00:05:50.623 "method": "bdev_nvme_attach_controller" 00:05:50.623 }, 00:05:50.623 { 00:05:50.623 "method": "bdev_wait_for_examine" 00:05:50.623 } 00:05:50.623 ] 00:05:50.623 } 00:05:50.623 ] 00:05:50.623 } 00:05:50.623 [2024-07-24 19:44:19.054181] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:05:50.623 [2024-07-24 19:44:19.054264] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61917 ] 00:05:50.623 [2024-07-24 19:44:19.191835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.881 [2024-07-24 19:44:19.355176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.881 [2024-07-24 19:44:19.439914] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:51.398  Copying: 56/56 [kB] (average 27 MBps) 00:05:51.398 00:05:51.398 19:44:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:51.398 19:44:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:05:51.398 19:44:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:51.398 19:44:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:51.398 19:44:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:05:51.398 19:44:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:51.398 19:44:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:51.398 19:44:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:51.398 19:44:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:51.398 19:44:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:51.398 19:44:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:51.398 { 00:05:51.398 "subsystems": [ 00:05:51.398 { 00:05:51.398 "subsystem": "bdev", 00:05:51.398 "config": [ 00:05:51.398 { 00:05:51.398 "params": { 00:05:51.398 "trtype": "pcie", 00:05:51.398 "traddr": "0000:00:10.0", 00:05:51.398 "name": "Nvme0" 00:05:51.398 }, 00:05:51.398 "method": "bdev_nvme_attach_controller" 00:05:51.398 }, 00:05:51.398 { 00:05:51.398 "method": "bdev_wait_for_examine" 00:05:51.398 } 00:05:51.398 ] 00:05:51.398 } 00:05:51.398 ] 00:05:51.398 } 00:05:51.398 [2024-07-24 19:44:19.965155] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:05:51.398 [2024-07-24 19:44:19.966089] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61938 ] 00:05:51.656 [2024-07-24 19:44:20.107487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.656 [2024-07-24 19:44:20.285438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.915 [2024-07-24 19:44:20.388784] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:52.238  Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:52.238 00:05:52.238 19:44:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:52.238 19:44:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:05:52.238 19:44:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:05:52.238 
19:44:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:05:52.238 19:44:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:05:52.238 19:44:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:52.238 19:44:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:52.820 19:44:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:05:52.820 19:44:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:52.820 19:44:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:52.820 19:44:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:52.820 { 00:05:52.820 "subsystems": [ 00:05:52.820 { 00:05:52.820 "subsystem": "bdev", 00:05:52.820 "config": [ 00:05:52.820 { 00:05:52.820 "params": { 00:05:52.820 "trtype": "pcie", 00:05:52.820 "traddr": "0000:00:10.0", 00:05:52.820 "name": "Nvme0" 00:05:52.820 }, 00:05:52.820 "method": "bdev_nvme_attach_controller" 00:05:52.820 }, 00:05:52.820 { 00:05:52.820 "method": "bdev_wait_for_examine" 00:05:52.820 } 00:05:52.820 ] 00:05:52.820 } 00:05:52.820 ] 00:05:52.820 } 00:05:52.820 [2024-07-24 19:44:21.442210] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:05:52.820 [2024-07-24 19:44:21.442314] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61958 ] 00:05:53.079 [2024-07-24 19:44:21.585564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.337 [2024-07-24 19:44:21.751298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.337 [2024-07-24 19:44:21.829044] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:53.903  Copying: 56/56 [kB] (average 54 MBps) 00:05:53.903 00:05:53.903 19:44:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:53.903 19:44:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:05:53.903 19:44:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:53.903 19:44:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:53.903 { 00:05:53.903 "subsystems": [ 00:05:53.903 { 00:05:53.903 "subsystem": "bdev", 00:05:53.903 "config": [ 00:05:53.903 { 00:05:53.903 "params": { 00:05:53.903 "trtype": "pcie", 00:05:53.903 "traddr": "0000:00:10.0", 00:05:53.903 "name": "Nvme0" 00:05:53.903 }, 00:05:53.903 "method": "bdev_nvme_attach_controller" 00:05:53.903 }, 00:05:53.903 { 00:05:53.903 "method": "bdev_wait_for_examine" 00:05:53.903 } 00:05:53.903 ] 00:05:53.903 } 00:05:53.903 ] 00:05:53.903 } 00:05:53.903 [2024-07-24 19:44:22.357773] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:05:53.903 [2024-07-24 19:44:22.357907] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61976 ] 00:05:53.903 [2024-07-24 19:44:22.505215] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.161 [2024-07-24 19:44:22.686225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.161 [2024-07-24 19:44:22.773779] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:54.677  Copying: 56/56 [kB] (average 54 MBps) 00:05:54.677 00:05:54.677 19:44:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:54.677 19:44:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:05:54.677 19:44:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:54.677 19:44:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:54.678 19:44:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:05:54.678 19:44:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:54.678 19:44:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:54.678 19:44:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:54.678 19:44:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:54.678 19:44:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:54.678 19:44:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:54.678 [2024-07-24 19:44:23.316840] Starting SPDK 
v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:05:54.678 [2024-07-24 19:44:23.316989] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61997 ] 00:05:54.678 { 00:05:54.678 "subsystems": [ 00:05:54.678 { 00:05:54.678 "subsystem": "bdev", 00:05:54.678 "config": [ 00:05:54.678 { 00:05:54.678 "params": { 00:05:54.678 "trtype": "pcie", 00:05:54.678 "traddr": "0000:00:10.0", 00:05:54.678 "name": "Nvme0" 00:05:54.678 }, 00:05:54.678 "method": "bdev_nvme_attach_controller" 00:05:54.678 }, 00:05:54.678 { 00:05:54.678 "method": "bdev_wait_for_examine" 00:05:54.678 } 00:05:54.678 ] 00:05:54.678 } 00:05:54.678 ] 00:05:54.678 } 00:05:54.934 [2024-07-24 19:44:23.454347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.191 [2024-07-24 19:44:23.622629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.191 [2024-07-24 19:44:23.705103] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:55.756  Copying: 1024/1024 [kB] (average 500 MBps) 00:05:55.756 00:05:55.756 19:44:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:55.756 19:44:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:55.756 19:44:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:05:55.756 19:44:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:05:55.756 19:44:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:05:55.756 19:44:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:05:55.756 19:44:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:55.756 19:44:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 
-- # set +x 00:05:56.322 19:44:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:05:56.323 19:44:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:56.323 19:44:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:56.323 19:44:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:56.323 [2024-07-24 19:44:24.754172] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:05:56.323 [2024-07-24 19:44:24.754989] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62021 ] 00:05:56.323 { 00:05:56.323 "subsystems": [ 00:05:56.323 { 00:05:56.323 "subsystem": "bdev", 00:05:56.323 "config": [ 00:05:56.323 { 00:05:56.323 "params": { 00:05:56.323 "trtype": "pcie", 00:05:56.323 "traddr": "0000:00:10.0", 00:05:56.323 "name": "Nvme0" 00:05:56.323 }, 00:05:56.323 "method": "bdev_nvme_attach_controller" 00:05:56.323 }, 00:05:56.323 { 00:05:56.323 "method": "bdev_wait_for_examine" 00:05:56.323 } 00:05:56.323 ] 00:05:56.323 } 00:05:56.323 ] 00:05:56.323 } 00:05:56.323 [2024-07-24 19:44:24.895360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.581 [2024-07-24 19:44:25.070687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.581 [2024-07-24 19:44:25.157878] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:57.098  Copying: 48/48 [kB] (average 46 MBps) 00:05:57.098 00:05:57.098 19:44:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:05:57.098 19:44:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:57.098 19:44:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:57.098 19:44:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:57.098 { 00:05:57.098 "subsystems": [ 00:05:57.098 { 00:05:57.098 "subsystem": "bdev", 00:05:57.098 "config": [ 00:05:57.098 { 00:05:57.098 "params": { 00:05:57.098 "trtype": "pcie", 00:05:57.098 "traddr": "0000:00:10.0", 00:05:57.098 "name": "Nvme0" 00:05:57.098 }, 00:05:57.098 "method": "bdev_nvme_attach_controller" 00:05:57.098 }, 00:05:57.098 { 00:05:57.098 "method": "bdev_wait_for_examine" 00:05:57.098 } 00:05:57.098 ] 00:05:57.098 } 00:05:57.098 ] 00:05:57.098 } 00:05:57.098 [2024-07-24 19:44:25.698205] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:05:57.098 [2024-07-24 19:44:25.698353] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62035 ] 00:05:57.356 [2024-07-24 19:44:25.887559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.614 [2024-07-24 19:44:26.057713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.614 [2024-07-24 19:44:26.140332] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:58.181  Copying: 48/48 [kB] (average 46 MBps) 00:05:58.181 00:05:58.181 19:44:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:58.181 19:44:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:05:58.181 19:44:26 
spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:58.181 19:44:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:58.181 19:44:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:05:58.181 19:44:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:58.181 19:44:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:58.181 19:44:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:58.181 19:44:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:58.181 19:44:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:58.181 19:44:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:58.181 { 00:05:58.181 "subsystems": [ 00:05:58.181 { 00:05:58.181 "subsystem": "bdev", 00:05:58.181 "config": [ 00:05:58.181 { 00:05:58.181 "params": { 00:05:58.181 "trtype": "pcie", 00:05:58.181 "traddr": "0000:00:10.0", 00:05:58.181 "name": "Nvme0" 00:05:58.181 }, 00:05:58.181 "method": "bdev_nvme_attach_controller" 00:05:58.181 }, 00:05:58.181 { 00:05:58.181 "method": "bdev_wait_for_examine" 00:05:58.181 } 00:05:58.181 ] 00:05:58.181 } 00:05:58.181 ] 00:05:58.181 } 00:05:58.181 [2024-07-24 19:44:26.675880] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:05:58.182 [2024-07-24 19:44:26.676288] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62056 ] 00:05:58.182 [2024-07-24 19:44:26.813185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.440 [2024-07-24 19:44:26.981379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.440 [2024-07-24 19:44:27.067392] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:58.956  Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:58.956 00:05:58.956 19:44:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:58.956 19:44:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:05:58.956 19:44:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:05:58.956 19:44:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:05:58.956 19:44:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:05:58.956 19:44:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:58.956 19:44:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:59.890 19:44:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:05:59.890 19:44:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:59.890 19:44:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:59.890 19:44:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:59.890 { 00:05:59.890 "subsystems": [ 00:05:59.890 { 00:05:59.890 "subsystem": "bdev", 00:05:59.890 "config": [ 
00:05:59.890 { 00:05:59.890 "params": { 00:05:59.890 "trtype": "pcie", 00:05:59.890 "traddr": "0000:00:10.0", 00:05:59.890 "name": "Nvme0" 00:05:59.890 }, 00:05:59.890 "method": "bdev_nvme_attach_controller" 00:05:59.890 }, 00:05:59.890 { 00:05:59.890 "method": "bdev_wait_for_examine" 00:05:59.890 } 00:05:59.890 ] 00:05:59.890 } 00:05:59.890 ] 00:05:59.890 } 00:05:59.890 [2024-07-24 19:44:28.340412] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:05:59.890 [2024-07-24 19:44:28.340533] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62086 ] 00:05:59.890 [2024-07-24 19:44:28.482976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.149 [2024-07-24 19:44:28.647028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.149 [2024-07-24 19:44:28.731457] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:00.689  Copying: 48/48 [kB] (average 46 MBps) 00:06:00.689 00:06:00.689 19:44:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:06:00.689 19:44:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:00.689 19:44:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:00.689 19:44:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:00.689 { 00:06:00.689 "subsystems": [ 00:06:00.689 { 00:06:00.689 "subsystem": "bdev", 00:06:00.689 "config": [ 00:06:00.689 { 00:06:00.689 "params": { 00:06:00.689 "trtype": "pcie", 00:06:00.689 "traddr": "0000:00:10.0", 00:06:00.689 "name": "Nvme0" 00:06:00.689 }, 00:06:00.689 
"method": "bdev_nvme_attach_controller" 00:06:00.689 }, 00:06:00.689 { 00:06:00.689 "method": "bdev_wait_for_examine" 00:06:00.689 } 00:06:00.689 ] 00:06:00.689 } 00:06:00.689 ] 00:06:00.689 } 00:06:00.689 [2024-07-24 19:44:29.270036] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:06:00.689 [2024-07-24 19:44:29.270150] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62105 ] 00:06:00.948 [2024-07-24 19:44:29.418313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.948 [2024-07-24 19:44:29.585551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.207 [2024-07-24 19:44:29.668587] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:01.465  Copying: 48/48 [kB] (average 46 MBps) 00:06:01.465 00:06:01.723 19:44:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:01.723 19:44:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:01.723 19:44:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:01.723 19:44:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:01.723 19:44:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:01.723 19:44:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:01.723 19:44:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:01.723 19:44:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:01.723 19:44:30 
spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:01.723 19:44:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:01.723 19:44:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:01.723 [2024-07-24 19:44:30.207054] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:06:01.723 [2024-07-24 19:44:30.207158] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62129 ] 00:06:01.723 { 00:06:01.723 "subsystems": [ 00:06:01.723 { 00:06:01.723 "subsystem": "bdev", 00:06:01.723 "config": [ 00:06:01.723 { 00:06:01.723 "params": { 00:06:01.723 "trtype": "pcie", 00:06:01.723 "traddr": "0000:00:10.0", 00:06:01.723 "name": "Nvme0" 00:06:01.723 }, 00:06:01.723 "method": "bdev_nvme_attach_controller" 00:06:01.723 }, 00:06:01.723 { 00:06:01.723 "method": "bdev_wait_for_examine" 00:06:01.723 } 00:06:01.723 ] 00:06:01.723 } 00:06:01.723 ] 00:06:01.723 } 00:06:01.723 [2024-07-24 19:44:30.350243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.982 [2024-07-24 19:44:30.508297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.982 [2024-07-24 19:44:30.590013] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:02.500  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:02.500 00:06:02.500 ************************************ 00:06:02.500 END TEST dd_rw 00:06:02.500 ************************************ 00:06:02.500 00:06:02.500 real 0m20.275s 00:06:02.500 user 0m14.614s 00:06:02.500 sys 0m8.150s 00:06:02.500 19:44:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.500 19:44:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 
00:06:02.500 19:44:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:06:02.500 19:44:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.500 19:44:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.500 19:44:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:02.500 ************************************ 00:06:02.500 START TEST dd_rw_offset 00:06:02.500 ************************************ 00:06:02.500 19:44:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1125 -- # basic_offset 00:06:02.500 19:44:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:06:02.500 19:44:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:06:02.500 19:44:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:06:02.500 19:44:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:02.500 19:44:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:06:02.500 19:44:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=9nz87wtt00lmcwtmzf18zb8glaklli2o3i7m7gpvxnjh3mb8fuh2xgsa4m02wpj4htf7i84pmdt8nchya2ic7v8ujf7fh5aa6ghfff2s0dzrrj5ihnpua351d1hsokdodjdesxnudih38jojrfhdl0gig9arut7f8but6cpd5fzolzjyjs9g2h7lwsegx7d6iwlh21u18myaw44g2nt12wj8pj0tzajqpm1l9n9owe2i6hzag7xlq6bfkwrchx45d10vko6ca72g583fb3x0447f1jfuiy6n6noakspide6qyhdhmchas6y71qgyjsppn4v5k5krk8nh1gv6otatf6a7rir7brsru0pdrutpb4kkjhclpxhn7yrdh16uvtpcr3teysg15727tyxkwi6edb8ow02hurj3jqordp3baka40kpma45j6umirrbl51ap6xw4kdhinxq35hik01zqsy2kmep4dxufhgj8hc51vfty7w66gn7bh9pcjgqbklu73pc5dyocmt815lw22gksytf8uzwk1uxo5m1u6sok14zilsl99slv2gwjazygijhh5j8zw5gmb4v5e0m20kt8nrd57wjb418xgcms07wyh6dgk1sheharhw33wzgcnw6reviq13m5te4iklwkknecae56uyj6d2f6cdsrh397gbbfzquckrni8nkob0w5ix3il62pom3wtgbmqjyp858dek5f81idvaz0t8j9o2vt11kr79eend5f3hamjx389uq5looy9m4g30vde1etjpfaee9d7vfse49k1nzplhcvwk3apyrvyhf1holekyeqyd4w6t57xfrs08c1f5gxyxddqw4db7qk68ri933lamxwnjm51pgqe6pmtqzfm8yo3tzxdpsbvw9ksszsinmtyzsvityzgc1h143hrlg7xobtn7yyn9pm5xsg9eti8tot6rl73sbtucot9d08vmwedzvyfbuymjxi214acburuyvgwwg4ni0g5p8v0kkpfwxuzengbchbaq2my9rzj53ns8geff9otajsdxpo5fdqwido4az6wvhpc3qzax1sdheq37yanyfdrx3ow3rtwtg065mck5v4bwetxk7b894n4nde9e6r6m96z3i3dsacebtmq4u5ksi6znlatq3jz02t7sgz4fwc64t82yf59lbpmmj2mrxi5p34xz09te8zsqssgd83rr9swxahsc483zqmtfki5tjhg3xa9c3wtz9q5frqvm7ee6lild78kcs65i2brvg894bnpxlwvmvlrtp30bo79jocu2g7ao6gf1nbie0hiqyxw0wt1ao3ekrjcuc9yjx91o5odlqueey29cjt1giugw6cfwu192j9i1od1ncqrbirok9aot7zajv45b04rcxlzd0xiit253mc1dl8yz2ut0x9tc53zurfmolb2jp0v8obd13k7fxfabxg4v6dat3eoz7qgz7dshgphtqz4vpfyfl0eq5tnkbepuilh1jiwhtaimvbr7f178b4430gh7pralwclt7na0h4wmgeyx46i98lomqzfrulnjbypbx78qd5vdwcdu39w43ltxwi4sma9ywi1n850t4adfbizhrcu0gjy28wsp8zijtqz2sh8uo8gy2n9ft88nrj5inj5z0ed2rzanz52gbhk8uqz7pnwlkdv715fnd702niz4rw6cfyt5qnbg4k21wr28hln5iqkt72v7atzooytgtgli9yj75te5wx6jt850qbqenru72auo2czwhvh675pk72495643oij51n3yunpop66lfnwc2cospcrxnd93dons2ol2h6zdln23neo43cnh5g8d6voqmm5ycn3lq14b71l5wvxw6cefnx6s96fkqlylab2u8x50ids32zp1rr5p8byvrk3ytg2hk5sj2uyw6mh2947xg7t72uylzsgo1jmgc5s31llnicdhfmolcmv5rngy37sqw553sttl3
vjboio5vixptlpcjqlf8ykr1nyp0h65guckqmxa4dm0l8hut6naldjioiusx826q2i0cprvdbco7t3rdzx20j3u4wqaq0cw5utk7isj0ge5mwruspr976hoarm4z6o3u308y34jzmv8o1djklwwz66sj62jiabqyfmqem1ddzby67u0zm82rx3fgrh0j8hbwootyjtl9uszu0an2man1oodnrj3c666j688gde50ao53tpz1tzyap0cvvqx4x6cvynsypctcyvvf2cjxff0wi562i58fx3xxdtk2isokmk3ixtx5ipbcrb4eoxuqir2vkzfj1tdp4etmjabaoi2rid2hvv3t4yrimi6kt5tvfr89d822wyi13dm9jbfeof2bj7ef1i7g7dqy2dfwtjw3fynikwb6cv37lzqwdnxogxvbkqgr7za9vwvqx2zycrw7y676e9ehcnwcz4jtvpsqnmn3xcpg5jsfds0g6dk48u4dcbi1bdcfgp5kqfvcq6i2kilyh43empzstlag0f789t0q9dgiyry2rflb6hp7hinpakv6vi7t833j8oszm0dlgixol3su64n83u7uylf9ii73542csn8emj2305bboeu2urnwx2tpoem2m4v6aspvyqolvbchwj4tpimpd7mffqdzd4ofk018f5a198nva4yf7gde2gq6w4tmvpxsdccvylo5ahpxiwhb4w2qep6nxtsg2pv3vji5shinpjg2g2sd11qen7pdzaec0sp642y46vd86m783lbupzxmosz7j77j78nmzirg13snxpg9cwpeii0zznxhttbxujdi8nmdghjprlukrt3skiwvjj037v6382i67fblvah68dxjic6ybzw7gst2g1lwsltcsja2wwqxi4v1nam7bj3uifbn8yixds6z58ot6y1ish6iyel2eqg0o0k1qa32idq2wmyyyruoybf5oc6psag0pc09lgngukiiciw5t2wdqtofh0gcb43tf1ii72j5t1p92fy6oqk1tm370e7t4n1k3g8sqbtg2t6o2vk524qodlvzqt8mkkqaccsn8o1ezmr0nn4n3k5qs0bfj1qcl0n5g2pac65xe2kblajjdzm8w1k5238hg2guyqjm8k1n2aylv0zbvu0eyv8wir7g28hf2vynfueygrxnpa5ta0htle0aogzsilrhrwwib326j5a0963es5wsc0q7s9curqgbsyl5cuggihxsv16twm18dyglccz7q061tlpl9dwlv5qlevjp32zumrieiaetdot0vm396zeib1ug3wdrkwb1r0bhue2z07y0xt8xmfcun02jt0fml81p8q4f9v6blgz2grn8ilo03ovxpw41dvr8ayowvknwv1hvczvko0um0vwmi939kwlbvsaor81tmb8hi2aub51f555iabnecxhczgh3yjm77ixdyvjyfqrqwjuvc9lai2xs6rq9zueurcp48j2zma5o4ly03tiuux8uuyg5widxdz63pglem7f2ddha207s7yx6m3hqmapn7ncmfdlxa9ks8qlm551co6purnp0v79bw87pnraxfst5d4kqeb92r5imy527xi9bxz8ea7vkfpg8j0bujmwwn2b69ecgn42w9xq0u77dpkds017n139rvqryakzr0lzrc4ig0o9wdqgep8ybuo6dbvv2rszmsy4aox03xwm0livpuzlckwwc6cojq1zuhjsefb02jfuswhvjv5x8umuvlbvb5xoyqlc9di9pk03ez1zjbpez3rc21oz8xfuhbbqesy5zmagsu5u6yrwdqx1vp2palndrowprfpkhuxjrlsnf4tduk8gzshyvfvj505bx4lkp1ti141tpf9st6uct9m55s569o2f5ln3fdz43b9dl7246al2yi20v6pgfr64lx1ud5ksk2cvfbfvbjy94pl8ltwiyfxo5o2vr7lunw9q2jrbemsanzreajjrjcaooo5q6
xk70jrwc7hz9l414nwjgisldz807ti9pbhmx9lcu1jrmiqadxa8ivwbhif4i8khk3avkot0r58wfucgkot2n9n8p3gdyxq2euly6c 00:06:02.500 19:44:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:06:02.500 19:44:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:06:02.500 19:44:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:02.500 19:44:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:02.759 [2024-07-24 19:44:31.195943] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:06:02.759 [2024-07-24 19:44:31.196062] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62165 ] 00:06:02.759 { 00:06:02.759 "subsystems": [ 00:06:02.759 { 00:06:02.759 "subsystem": "bdev", 00:06:02.759 "config": [ 00:06:02.759 { 00:06:02.759 "params": { 00:06:02.759 "trtype": "pcie", 00:06:02.759 "traddr": "0000:00:10.0", 00:06:02.759 "name": "Nvme0" 00:06:02.759 }, 00:06:02.759 "method": "bdev_nvme_attach_controller" 00:06:02.759 }, 00:06:02.759 { 00:06:02.759 "method": "bdev_wait_for_examine" 00:06:02.759 } 00:06:02.759 ] 00:06:02.759 } 00:06:02.759 ] 00:06:02.759 } 00:06:02.759 [2024-07-24 19:44:31.334381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.018 [2024-07-24 19:44:31.445607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.018 [2024-07-24 19:44:31.490166] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:03.349  Copying: 4096/4096 [B] (average 4000 kBps) 00:06:03.349 00:06:03.349 19:44:31 
spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:06:03.349 19:44:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:06:03.349 19:44:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:03.349 19:44:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:03.349 [2024-07-24 19:44:31.841692] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:06:03.349 [2024-07-24 19:44:31.841854] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62173 ] 00:06:03.349 { 00:06:03.349 "subsystems": [ 00:06:03.349 { 00:06:03.349 "subsystem": "bdev", 00:06:03.349 "config": [ 00:06:03.349 { 00:06:03.349 "params": { 00:06:03.349 "trtype": "pcie", 00:06:03.349 "traddr": "0000:00:10.0", 00:06:03.349 "name": "Nvme0" 00:06:03.349 }, 00:06:03.349 "method": "bdev_nvme_attach_controller" 00:06:03.349 }, 00:06:03.349 { 00:06:03.349 "method": "bdev_wait_for_examine" 00:06:03.349 } 00:06:03.349 ] 00:06:03.349 } 00:06:03.349 ] 00:06:03.349 } 00:06:03.349 [2024-07-24 19:44:31.985623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.608 [2024-07-24 19:44:32.091222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.609 [2024-07-24 19:44:32.134044] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:03.870  Copying: 4096/4096 [B] (average 4000 kBps) 00:06:03.870 00:06:03.870 19:44:32 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:03.871 19:44:32 spdk_dd.spdk_dd_basic_rw.dd_rw_offset 
-- dd/basic_rw.sh@72 -- # [[ 9nz87wtt00lmcwtmzf18zb8glaklli2o3i7m7gpvxnjh3mb8fuh2xgsa4m02wpj4htf7i84pmdt8nchya2ic7v8ujf7fh5aa6ghfff2s0dzrrj5ihnpua351d1hsokdodjdesxnudih38jojrfhdl0gig9arut7f8but6cpd5fzolzjyjs9g2h7lwsegx7d6iwlh21u18myaw44g2nt12wj8pj0tzajqpm1l9n9owe2i6hzag7xlq6bfkwrchx45d10vko6ca72g583fb3x0447f1jfuiy6n6noakspide6qyhdhmchas6y71qgyjsppn4v5k5krk8nh1gv6otatf6a7rir7brsru0pdrutpb4kkjhclpxhn7yrdh16uvtpcr3teysg15727tyxkwi6edb8ow02hurj3jqordp3baka40kpma45j6umirrbl51ap6xw4kdhinxq35hik01zqsy2kmep4dxufhgj8hc51vfty7w66gn7bh9pcjgqbklu73pc5dyocmt815lw22gksytf8uzwk1uxo5m1u6sok14zilsl99slv2gwjazygijhh5j8zw5gmb4v5e0m20kt8nrd57wjb418xgcms07wyh6dgk1sheharhw33wzgcnw6reviq13m5te4iklwkknecae56uyj6d2f6cdsrh397gbbfzquckrni8nkob0w5ix3il62pom3wtgbmqjyp858dek5f81idvaz0t8j9o2vt11kr79eend5f3hamjx389uq5looy9m4g30vde1etjpfaee9d7vfse49k1nzplhcvwk3apyrvyhf1holekyeqyd4w6t57xfrs08c1f5gxyxddqw4db7qk68ri933lamxwnjm51pgqe6pmtqzfm8yo3tzxdpsbvw9ksszsinmtyzsvityzgc1h143hrlg7xobtn7yyn9pm5xsg9eti8tot6rl73sbtucot9d08vmwedzvyfbuymjxi214acburuyvgwwg4ni0g5p8v0kkpfwxuzengbchbaq2my9rzj53ns8geff9otajsdxpo5fdqwido4az6wvhpc3qzax1sdheq37yanyfdrx3ow3rtwtg065mck5v4bwetxk7b894n4nde9e6r6m96z3i3dsacebtmq4u5ksi6znlatq3jz02t7sgz4fwc64t82yf59lbpmmj2mrxi5p34xz09te8zsqssgd83rr9swxahsc483zqmtfki5tjhg3xa9c3wtz9q5frqvm7ee6lild78kcs65i2brvg894bnpxlwvmvlrtp30bo79jocu2g7ao6gf1nbie0hiqyxw0wt1ao3ekrjcuc9yjx91o5odlqueey29cjt1giugw6cfwu192j9i1od1ncqrbirok9aot7zajv45b04rcxlzd0xiit253mc1dl8yz2ut0x9tc53zurfmolb2jp0v8obd13k7fxfabxg4v6dat3eoz7qgz7dshgphtqz4vpfyfl0eq5tnkbepuilh1jiwhtaimvbr7f178b4430gh7pralwclt7na0h4wmgeyx46i98lomqzfrulnjbypbx78qd5vdwcdu39w43ltxwi4sma9ywi1n850t4adfbizhrcu0gjy28wsp8zijtqz2sh8uo8gy2n9ft88nrj5inj5z0ed2rzanz52gbhk8uqz7pnwlkdv715fnd702niz4rw6cfyt5qnbg4k21wr28hln5iqkt72v7atzooytgtgli9yj75te5wx6jt850qbqenru72auo2czwhvh675pk72495643oij51n3yunpop66lfnwc2cospcrxnd93dons2ol2h6zdln23neo43cnh5g8d6voqmm5ycn3lq14b71l5wvxw6cefnx6s96fkqlylab2u8x50ids32zp1rr5p8byvrk3ytg2hk5sj2uyw6mh2947xg7t72uylzsgo1jmgc5s31llnicdhf
molcmv5rngy37sqw553sttl3vjboio5vixptlpcjqlf8ykr1nyp0h65guckqmxa4dm0l8hut6naldjioiusx826q2i0cprvdbco7t3rdzx20j3u4wqaq0cw5utk7isj0ge5mwruspr976hoarm4z6o3u308y34jzmv8o1djklwwz66sj62jiabqyfmqem1ddzby67u0zm82rx3fgrh0j8hbwootyjtl9uszu0an2man1oodnrj3c666j688gde50ao53tpz1tzyap0cvvqx4x6cvynsypctcyvvf2cjxff0wi562i58fx3xxdtk2isokmk3ixtx5ipbcrb4eoxuqir2vkzfj1tdp4etmjabaoi2rid2hvv3t4yrimi6kt5tvfr89d822wyi13dm9jbfeof2bj7ef1i7g7dqy2dfwtjw3fynikwb6cv37lzqwdnxogxvbkqgr7za9vwvqx2zycrw7y676e9ehcnwcz4jtvpsqnmn3xcpg5jsfds0g6dk48u4dcbi1bdcfgp5kqfvcq6i2kilyh43empzstlag0f789t0q9dgiyry2rflb6hp7hinpakv6vi7t833j8oszm0dlgixol3su64n83u7uylf9ii73542csn8emj2305bboeu2urnwx2tpoem2m4v6aspvyqolvbchwj4tpimpd7mffqdzd4ofk018f5a198nva4yf7gde2gq6w4tmvpxsdccvylo5ahpxiwhb4w2qep6nxtsg2pv3vji5shinpjg2g2sd11qen7pdzaec0sp642y46vd86m783lbupzxmosz7j77j78nmzirg13snxpg9cwpeii0zznxhttbxujdi8nmdghjprlukrt3skiwvjj037v6382i67fblvah68dxjic6ybzw7gst2g1lwsltcsja2wwqxi4v1nam7bj3uifbn8yixds6z58ot6y1ish6iyel2eqg0o0k1qa32idq2wmyyyruoybf5oc6psag0pc09lgngukiiciw5t2wdqtofh0gcb43tf1ii72j5t1p92fy6oqk1tm370e7t4n1k3g8sqbtg2t6o2vk524qodlvzqt8mkkqaccsn8o1ezmr0nn4n3k5qs0bfj1qcl0n5g2pac65xe2kblajjdzm8w1k5238hg2guyqjm8k1n2aylv0zbvu0eyv8wir7g28hf2vynfueygrxnpa5ta0htle0aogzsilrhrwwib326j5a0963es5wsc0q7s9curqgbsyl5cuggihxsv16twm18dyglccz7q061tlpl9dwlv5qlevjp32zumrieiaetdot0vm396zeib1ug3wdrkwb1r0bhue2z07y0xt8xmfcun02jt0fml81p8q4f9v6blgz2grn8ilo03ovxpw41dvr8ayowvknwv1hvczvko0um0vwmi939kwlbvsaor81tmb8hi2aub51f555iabnecxhczgh3yjm77ixdyvjyfqrqwjuvc9lai2xs6rq9zueurcp48j2zma5o4ly03tiuux8uuyg5widxdz63pglem7f2ddha207s7yx6m3hqmapn7ncmfdlxa9ks8qlm551co6purnp0v79bw87pnraxfst5d4kqeb92r5imy527xi9bxz8ea7vkfpg8j0bujmwwn2b69ecgn42w9xq0u77dpkds017n139rvqryakzr0lzrc4ig0o9wdqgep8ybuo6dbvv2rszmsy4aox03xwm0livpuzlckwwc6cojq1zuhjsefb02jfuswhvjv5x8umuvlbvb5xoyqlc9di9pk03ez1zjbpez3rc21oz8xfuhbbqesy5zmagsu5u6yrwdqx1vp2palndrowprfpkhuxjrlsnf4tduk8gzshyvfvj505bx4lkp1ti141tpf9st6uct9m55s569o2f5ln3fdz43b9dl7246al2yi20v6pgfr64lx1ud5ksk2cvfbfvbjy94pl8ltwiyfxo5o2vr7lunw9q2
jrbemsanzreajjrjcaooo5q6xk70jrwc7hz9l414nwjgisldz807ti9pbhmx9lcu1jrmiqadxa8ivwbhif4i8khk3avkot0r58wfucgkot2n9n8p3gdyxq2euly6c == \9\n\z\8\7\w\t\t\0\0\l\m\c\w\t\m\z\f\1\8\z\b\8\g\l\a\k\l\l\i\2\o\3\i\7\m\7\g\p\v\x\n\j\h\3\m\b\8\f\u\h\2\x\g\s\a\4\m\0\2\w\p\j\4\h\t\f\7\i\8\4\p\m\d\t\8\n\c\h\y\a\2\i\c\7\v\8\u\j\f\7\f\h\5\a\a\6\g\h\f\f\f\2\s\0\d\z\r\r\j\5\i\h\n\p\u\a\3\5\1\d\1\h\s\o\k\d\o\d\j\d\e\s\x\n\u\d\i\h\3\8\j\o\j\r\f\h\d\l\0\g\i\g\9\a\r\u\t\7\f\8\b\u\t\6\c\p\d\5\f\z\o\l\z\j\y\j\s\9\g\2\h\7\l\w\s\e\g\x\7\d\6\i\w\l\h\2\1\u\1\8\m\y\a\w\4\4\g\2\n\t\1\2\w\j\8\p\j\0\t\z\a\j\q\p\m\1\l\9\n\9\o\w\e\2\i\6\h\z\a\g\7\x\l\q\6\b\f\k\w\r\c\h\x\4\5\d\1\0\v\k\o\6\c\a\7\2\g\5\8\3\f\b\3\x\0\4\4\7\f\1\j\f\u\i\y\6\n\6\n\o\a\k\s\p\i\d\e\6\q\y\h\d\h\m\c\h\a\s\6\y\7\1\q\g\y\j\s\p\p\n\4\v\5\k\5\k\r\k\8\n\h\1\g\v\6\o\t\a\t\f\6\a\7\r\i\r\7\b\r\s\r\u\0\p\d\r\u\t\p\b\4\k\k\j\h\c\l\p\x\h\n\7\y\r\d\h\1\6\u\v\t\p\c\r\3\t\e\y\s\g\1\5\7\2\7\t\y\x\k\w\i\6\e\d\b\8\o\w\0\2\h\u\r\j\3\j\q\o\r\d\p\3\b\a\k\a\4\0\k\p\m\a\4\5\j\6\u\m\i\r\r\b\l\5\1\a\p\6\x\w\4\k\d\h\i\n\x\q\3\5\h\i\k\0\1\z\q\s\y\2\k\m\e\p\4\d\x\u\f\h\g\j\8\h\c\5\1\v\f\t\y\7\w\6\6\g\n\7\b\h\9\p\c\j\g\q\b\k\l\u\7\3\p\c\5\d\y\o\c\m\t\8\1\5\l\w\2\2\g\k\s\y\t\f\8\u\z\w\k\1\u\x\o\5\m\1\u\6\s\o\k\1\4\z\i\l\s\l\9\9\s\l\v\2\g\w\j\a\z\y\g\i\j\h\h\5\j\8\z\w\5\g\m\b\4\v\5\e\0\m\2\0\k\t\8\n\r\d\5\7\w\j\b\4\1\8\x\g\c\m\s\0\7\w\y\h\6\d\g\k\1\s\h\e\h\a\r\h\w\3\3\w\z\g\c\n\w\6\r\e\v\i\q\1\3\m\5\t\e\4\i\k\l\w\k\k\n\e\c\a\e\5\6\u\y\j\6\d\2\f\6\c\d\s\r\h\3\9\7\g\b\b\f\z\q\u\c\k\r\n\i\8\n\k\o\b\0\w\5\i\x\3\i\l\6\2\p\o\m\3\w\t\g\b\m\q\j\y\p\8\5\8\d\e\k\5\f\8\1\i\d\v\a\z\0\t\8\j\9\o\2\v\t\1\1\k\r\7\9\e\e\n\d\5\f\3\h\a\m\j\x\3\8\9\u\q\5\l\o\o\y\9\m\4\g\3\0\v\d\e\1\e\t\j\p\f\a\e\e\9\d\7\v\f\s\e\4\9\k\1\n\z\p\l\h\c\v\w\k\3\a\p\y\r\v\y\h\f\1\h\o\l\e\k\y\e\q\y\d\4\w\6\t\5\7\x\f\r\s\0\8\c\1\f\5\g\x\y\x\d\d\q\w\4\d\b\7\q\k\6\8\r\i\9\3\3\l\a\m\x\w\n\j\m\5\1\p\g\q\e\6\p\m\t\q\z\f\m\8\y\o\3\t\z\x\d\p\s\b\v\w\9\k\s\s\z\s\i\n\m\t\y\z\s\v\i\t\y\z\g\c\1\h\1\4\3\h\r\l\g\7\x\o\b\
t\n\7\y\y\n\9\p\m\5\x\s\g\9\e\t\i\8\t\o\t\6\r\l\7\3\s\b\t\u\c\o\t\9\d\0\8\v\m\w\e\d\z\v\y\f\b\u\y\m\j\x\i\2\1\4\a\c\b\u\r\u\y\v\g\w\w\g\4\n\i\0\g\5\p\8\v\0\k\k\p\f\w\x\u\z\e\n\g\b\c\h\b\a\q\2\m\y\9\r\z\j\5\3\n\s\8\g\e\f\f\9\o\t\a\j\s\d\x\p\o\5\f\d\q\w\i\d\o\4\a\z\6\w\v\h\p\c\3\q\z\a\x\1\s\d\h\e\q\3\7\y\a\n\y\f\d\r\x\3\o\w\3\r\t\w\t\g\0\6\5\m\c\k\5\v\4\b\w\e\t\x\k\7\b\8\9\4\n\4\n\d\e\9\e\6\r\6\m\9\6\z\3\i\3\d\s\a\c\e\b\t\m\q\4\u\5\k\s\i\6\z\n\l\a\t\q\3\j\z\0\2\t\7\s\g\z\4\f\w\c\6\4\t\8\2\y\f\5\9\l\b\p\m\m\j\2\m\r\x\i\5\p\3\4\x\z\0\9\t\e\8\z\s\q\s\s\g\d\8\3\r\r\9\s\w\x\a\h\s\c\4\8\3\z\q\m\t\f\k\i\5\t\j\h\g\3\x\a\9\c\3\w\t\z\9\q\5\f\r\q\v\m\7\e\e\6\l\i\l\d\7\8\k\c\s\6\5\i\2\b\r\v\g\8\9\4\b\n\p\x\l\w\v\m\v\l\r\t\p\3\0\b\o\7\9\j\o\c\u\2\g\7\a\o\6\g\f\1\n\b\i\e\0\h\i\q\y\x\w\0\w\t\1\a\o\3\e\k\r\j\c\u\c\9\y\j\x\9\1\o\5\o\d\l\q\u\e\e\y\2\9\c\j\t\1\g\i\u\g\w\6\c\f\w\u\1\9\2\j\9\i\1\o\d\1\n\c\q\r\b\i\r\o\k\9\a\o\t\7\z\a\j\v\4\5\b\0\4\r\c\x\l\z\d\0\x\i\i\t\2\5\3\m\c\1\d\l\8\y\z\2\u\t\0\x\9\t\c\5\3\z\u\r\f\m\o\l\b\2\j\p\0\v\8\o\b\d\1\3\k\7\f\x\f\a\b\x\g\4\v\6\d\a\t\3\e\o\z\7\q\g\z\7\d\s\h\g\p\h\t\q\z\4\v\p\f\y\f\l\0\e\q\5\t\n\k\b\e\p\u\i\l\h\1\j\i\w\h\t\a\i\m\v\b\r\7\f\1\7\8\b\4\4\3\0\g\h\7\p\r\a\l\w\c\l\t\7\n\a\0\h\4\w\m\g\e\y\x\4\6\i\9\8\l\o\m\q\z\f\r\u\l\n\j\b\y\p\b\x\7\8\q\d\5\v\d\w\c\d\u\3\9\w\4\3\l\t\x\w\i\4\s\m\a\9\y\w\i\1\n\8\5\0\t\4\a\d\f\b\i\z\h\r\c\u\0\g\j\y\2\8\w\s\p\8\z\i\j\t\q\z\2\s\h\8\u\o\8\g\y\2\n\9\f\t\8\8\n\r\j\5\i\n\j\5\z\0\e\d\2\r\z\a\n\z\5\2\g\b\h\k\8\u\q\z\7\p\n\w\l\k\d\v\7\1\5\f\n\d\7\0\2\n\i\z\4\r\w\6\c\f\y\t\5\q\n\b\g\4\k\2\1\w\r\2\8\h\l\n\5\i\q\k\t\7\2\v\7\a\t\z\o\o\y\t\g\t\g\l\i\9\y\j\7\5\t\e\5\w\x\6\j\t\8\5\0\q\b\q\e\n\r\u\7\2\a\u\o\2\c\z\w\h\v\h\6\7\5\p\k\7\2\4\9\5\6\4\3\o\i\j\5\1\n\3\y\u\n\p\o\p\6\6\l\f\n\w\c\2\c\o\s\p\c\r\x\n\d\9\3\d\o\n\s\2\o\l\2\h\6\z\d\l\n\2\3\n\e\o\4\3\c\n\h\5\g\8\d\6\v\o\q\m\m\5\y\c\n\3\l\q\1\4\b\7\1\l\5\w\v\x\w\6\c\e\f\n\x\6\s\9\6\f\k\q\l\y\l\a\b\2\u\8\x\5\0\i\d\s\3\2\z\p\1\r\r\5\p\8\b\y\v\r\k\3\y\t\g\2\h\k\5\s\j\2\u\y\w\6\
m\h\2\9\4\7\x\g\7\t\7\2\u\y\l\z\s\g\o\1\j\m\g\c\5\s\3\1\l\l\n\i\c\d\h\f\m\o\l\c\m\v\5\r\n\g\y\3\7\s\q\w\5\5\3\s\t\t\l\3\v\j\b\o\i\o\5\v\i\x\p\t\l\p\c\j\q\l\f\8\y\k\r\1\n\y\p\0\h\6\5\g\u\c\k\q\m\x\a\4\d\m\0\l\8\h\u\t\6\n\a\l\d\j\i\o\i\u\s\x\8\2\6\q\2\i\0\c\p\r\v\d\b\c\o\7\t\3\r\d\z\x\2\0\j\3\u\4\w\q\a\q\0\c\w\5\u\t\k\7\i\s\j\0\g\e\5\m\w\r\u\s\p\r\9\7\6\h\o\a\r\m\4\z\6\o\3\u\3\0\8\y\3\4\j\z\m\v\8\o\1\d\j\k\l\w\w\z\6\6\s\j\6\2\j\i\a\b\q\y\f\m\q\e\m\1\d\d\z\b\y\6\7\u\0\z\m\8\2\r\x\3\f\g\r\h\0\j\8\h\b\w\o\o\t\y\j\t\l\9\u\s\z\u\0\a\n\2\m\a\n\1\o\o\d\n\r\j\3\c\6\6\6\j\6\8\8\g\d\e\5\0\a\o\5\3\t\p\z\1\t\z\y\a\p\0\c\v\v\q\x\4\x\6\c\v\y\n\s\y\p\c\t\c\y\v\v\f\2\c\j\x\f\f\0\w\i\5\6\2\i\5\8\f\x\3\x\x\d\t\k\2\i\s\o\k\m\k\3\i\x\t\x\5\i\p\b\c\r\b\4\e\o\x\u\q\i\r\2\v\k\z\f\j\1\t\d\p\4\e\t\m\j\a\b\a\o\i\2\r\i\d\2\h\v\v\3\t\4\y\r\i\m\i\6\k\t\5\t\v\f\r\8\9\d\8\2\2\w\y\i\1\3\d\m\9\j\b\f\e\o\f\2\b\j\7\e\f\1\i\7\g\7\d\q\y\2\d\f\w\t\j\w\3\f\y\n\i\k\w\b\6\c\v\3\7\l\z\q\w\d\n\x\o\g\x\v\b\k\q\g\r\7\z\a\9\v\w\v\q\x\2\z\y\c\r\w\7\y\6\7\6\e\9\e\h\c\n\w\c\z\4\j\t\v\p\s\q\n\m\n\3\x\c\p\g\5\j\s\f\d\s\0\g\6\d\k\4\8\u\4\d\c\b\i\1\b\d\c\f\g\p\5\k\q\f\v\c\q\6\i\2\k\i\l\y\h\4\3\e\m\p\z\s\t\l\a\g\0\f\7\8\9\t\0\q\9\d\g\i\y\r\y\2\r\f\l\b\6\h\p\7\h\i\n\p\a\k\v\6\v\i\7\t\8\3\3\j\8\o\s\z\m\0\d\l\g\i\x\o\l\3\s\u\6\4\n\8\3\u\7\u\y\l\f\9\i\i\7\3\5\4\2\c\s\n\8\e\m\j\2\3\0\5\b\b\o\e\u\2\u\r\n\w\x\2\t\p\o\e\m\2\m\4\v\6\a\s\p\v\y\q\o\l\v\b\c\h\w\j\4\t\p\i\m\p\d\7\m\f\f\q\d\z\d\4\o\f\k\0\1\8\f\5\a\1\9\8\n\v\a\4\y\f\7\g\d\e\2\g\q\6\w\4\t\m\v\p\x\s\d\c\c\v\y\l\o\5\a\h\p\x\i\w\h\b\4\w\2\q\e\p\6\n\x\t\s\g\2\p\v\3\v\j\i\5\s\h\i\n\p\j\g\2\g\2\s\d\1\1\q\e\n\7\p\d\z\a\e\c\0\s\p\6\4\2\y\4\6\v\d\8\6\m\7\8\3\l\b\u\p\z\x\m\o\s\z\7\j\7\7\j\7\8\n\m\z\i\r\g\1\3\s\n\x\p\g\9\c\w\p\e\i\i\0\z\z\n\x\h\t\t\b\x\u\j\d\i\8\n\m\d\g\h\j\p\r\l\u\k\r\t\3\s\k\i\w\v\j\j\0\3\7\v\6\3\8\2\i\6\7\f\b\l\v\a\h\6\8\d\x\j\i\c\6\y\b\z\w\7\g\s\t\2\g\1\l\w\s\l\t\c\s\j\a\2\w\w\q\x\i\4\v\1\n\a\m\7\b\j\3\u\i\f\b\n\8\y\i\x\d\s\6\z\5\8\o\t\6\y\1\i\s\h\6\i\y\e\l\2\
e\q\g\0\o\0\k\1\q\a\3\2\i\d\q\2\w\m\y\y\y\r\u\o\y\b\f\5\o\c\6\p\s\a\g\0\p\c\0\9\l\g\n\g\u\k\i\i\c\i\w\5\t\2\w\d\q\t\o\f\h\0\g\c\b\4\3\t\f\1\i\i\7\2\j\5\t\1\p\9\2\f\y\6\o\q\k\1\t\m\3\7\0\e\7\t\4\n\1\k\3\g\8\s\q\b\t\g\2\t\6\o\2\v\k\5\2\4\q\o\d\l\v\z\q\t\8\m\k\k\q\a\c\c\s\n\8\o\1\e\z\m\r\0\n\n\4\n\3\k\5\q\s\0\b\f\j\1\q\c\l\0\n\5\g\2\p\a\c\6\5\x\e\2\k\b\l\a\j\j\d\z\m\8\w\1\k\5\2\3\8\h\g\2\g\u\y\q\j\m\8\k\1\n\2\a\y\l\v\0\z\b\v\u\0\e\y\v\8\w\i\r\7\g\2\8\h\f\2\v\y\n\f\u\e\y\g\r\x\n\p\a\5\t\a\0\h\t\l\e\0\a\o\g\z\s\i\l\r\h\r\w\w\i\b\3\2\6\j\5\a\0\9\6\3\e\s\5\w\s\c\0\q\7\s\9\c\u\r\q\g\b\s\y\l\5\c\u\g\g\i\h\x\s\v\1\6\t\w\m\1\8\d\y\g\l\c\c\z\7\q\0\6\1\t\l\p\l\9\d\w\l\v\5\q\l\e\v\j\p\3\2\z\u\m\r\i\e\i\a\e\t\d\o\t\0\v\m\3\9\6\z\e\i\b\1\u\g\3\w\d\r\k\w\b\1\r\0\b\h\u\e\2\z\0\7\y\0\x\t\8\x\m\f\c\u\n\0\2\j\t\0\f\m\l\8\1\p\8\q\4\f\9\v\6\b\l\g\z\2\g\r\n\8\i\l\o\0\3\o\v\x\p\w\4\1\d\v\r\8\a\y\o\w\v\k\n\w\v\1\h\v\c\z\v\k\o\0\u\m\0\v\w\m\i\9\3\9\k\w\l\b\v\s\a\o\r\8\1\t\m\b\8\h\i\2\a\u\b\5\1\f\5\5\5\i\a\b\n\e\c\x\h\c\z\g\h\3\y\j\m\7\7\i\x\d\y\v\j\y\f\q\r\q\w\j\u\v\c\9\l\a\i\2\x\s\6\r\q\9\z\u\e\u\r\c\p\4\8\j\2\z\m\a\5\o\4\l\y\0\3\t\i\u\u\x\8\u\u\y\g\5\w\i\d\x\d\z\6\3\p\g\l\e\m\7\f\2\d\d\h\a\2\0\7\s\7\y\x\6\m\3\h\q\m\a\p\n\7\n\c\m\f\d\l\x\a\9\k\s\8\q\l\m\5\5\1\c\o\6\p\u\r\n\p\0\v\7\9\b\w\8\7\p\n\r\a\x\f\s\t\5\d\4\k\q\e\b\9\2\r\5\i\m\y\5\2\7\x\i\9\b\x\z\8\e\a\7\v\k\f\p\g\8\j\0\b\u\j\m\w\w\n\2\b\6\9\e\c\g\n\4\2\w\9\x\q\0\u\7\7\d\p\k\d\s\0\1\7\n\1\3\9\r\v\q\r\y\a\k\z\r\0\l\z\r\c\4\i\g\0\o\9\w\d\q\g\e\p\8\y\b\u\o\6\d\b\v\v\2\r\s\z\m\s\y\4\a\o\x\0\3\x\w\m\0\l\i\v\p\u\z\l\c\k\w\w\c\6\c\o\j\q\1\z\u\h\j\s\e\f\b\0\2\j\f\u\s\w\h\v\j\v\5\x\8\u\m\u\v\l\b\v\b\5\x\o\y\q\l\c\9\d\i\9\p\k\0\3\e\z\1\z\j\b\p\e\z\3\r\c\2\1\o\z\8\x\f\u\h\b\b\q\e\s\y\5\z\m\a\g\s\u\5\u\6\y\r\w\d\q\x\1\v\p\2\p\a\l\n\d\r\o\w\p\r\f\p\k\h\u\x\j\r\l\s\n\f\4\t\d\u\k\8\g\z\s\h\y\v\f\v\j\5\0\5\b\x\4\l\k\p\1\t\i\1\4\1\t\p\f\9\s\t\6\u\c\t\9\m\5\5\s\5\6\9\o\2\f\5\l\n\3\f\d\z\4\3\b\9\d\l\7\2\4\6\a\l\2\y\i\2\0\v\6\p\g\f\r\6\4\l\x\1\u\d\5\k\s\k\
2\c\v\f\b\f\v\b\j\y\9\4\p\l\8\l\t\w\i\y\f\x\o\5\o\2\v\r\7\l\u\n\w\9\q\2\j\r\b\e\m\s\a\n\z\r\e\a\j\j\r\j\c\a\o\o\o\5\q\6\x\k\7\0\j\r\w\c\7\h\z\9\l\4\1\4\n\w\j\g\i\s\l\d\z\8\0\7\t\i\9\p\b\h\m\x\9\l\c\u\1\j\r\m\i\q\a\d\x\a\8\i\v\w\b\h\i\f\4\i\8\k\h\k\3\a\v\k\o\t\0\r\5\8\w\f\u\c\g\k\o\t\2\n\9\n\8\p\3\g\d\y\x\q\2\e\u\l\y\6\c ]] 00:06:03.871 00:06:03.871 real 0m1.321s 00:06:03.871 user 0m0.932s 00:06:03.871 sys 0m0.538s 00:06:03.871 ************************************ 00:06:03.871 END TEST dd_rw_offset 00:06:03.871 ************************************ 00:06:03.871 19:44:32 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.871 19:44:32 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:03.871 19:44:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:06:03.871 19:44:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:03.871 19:44:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:03.871 19:44:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:03.871 19:44:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:06:03.871 19:44:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:03.871 19:44:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:06:03.871 19:44:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:03.871 19:44:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:06:03.871 19:44:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:03.871 19:44:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:03.871 { 00:06:03.871 "subsystems": [ 00:06:03.871 { 00:06:03.871 "subsystem": "bdev", 00:06:03.871 "config": [ 00:06:03.871 { 00:06:03.871 "params": { 00:06:03.871 
"trtype": "pcie", 00:06:03.871 "traddr": "0000:00:10.0", 00:06:03.871 "name": "Nvme0" 00:06:03.871 }, 00:06:03.871 "method": "bdev_nvme_attach_controller" 00:06:03.871 }, 00:06:03.871 { 00:06:03.871 "method": "bdev_wait_for_examine" 00:06:03.871 } 00:06:03.871 ] 00:06:03.871 } 00:06:03.871 ] 00:06:03.871 } 00:06:04.130 [2024-07-24 19:44:32.535834] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:06:04.130 [2024-07-24 19:44:32.535929] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62208 ] 00:06:04.130 [2024-07-24 19:44:32.678697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.388 [2024-07-24 19:44:32.801943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.388 [2024-07-24 19:44:32.850574] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:04.953  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:04.953 00:06:04.953 19:44:33 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:04.953 ************************************ 00:06:04.953 END TEST spdk_dd_basic_rw 00:06:04.953 ************************************ 00:06:04.953 00:06:04.953 real 0m23.720s 00:06:04.953 user 0m16.859s 00:06:04.953 sys 0m9.466s 00:06:04.953 19:44:33 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.953 19:44:33 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:04.953 19:44:33 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:04.953 19:44:33 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.953 19:44:33 spdk_dd -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.953 19:44:33 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:04.953 ************************************ 00:06:04.953 START TEST spdk_dd_posix 00:06:04.953 ************************************ 00:06:04.953 19:44:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:04.953 * Looking for test storage... 00:06:04.953 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:04.953 19:44:33 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:04.953 19:44:33 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:04.953 19:44:33 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:04.953 19:44:33 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:04.953 19:44:33 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.953 19:44:33 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.953 19:44:33 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.953 19:44:33 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:06:04.953 19:44:33 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.953 19:44:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:04.953 19:44:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', 
liburing in use' 00:06:04.953 19:44:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:04.953 19:44:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:04.953 19:44:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:04.953 19:44:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:04.953 19:44:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:06:04.953 19:44:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:04.953 * First test run, liburing in use 00:06:04.953 19:44:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:04.953 19:44:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.953 19:44:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.953 19:44:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:04.953 ************************************ 00:06:04.953 START TEST dd_flag_append 00:06:04.953 ************************************ 00:06:04.953 19:44:33 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1125 -- # append 00:06:04.953 19:44:33 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:06:04.953 19:44:33 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:06:04.953 19:44:33 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:06:04.953 19:44:33 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:04.953 19:44:33 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:04.953 19:44:33 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=mfgawf48vnz4r52hp3q062srhjyajw91 00:06:04.953 19:44:33 spdk_dd.spdk_dd_posix.dd_flag_append -- 
dd/posix.sh@20 -- # gen_bytes 32 00:06:04.953 19:44:33 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:04.953 19:44:33 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:04.953 19:44:33 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=rub9t7z72ffi4104rzduyt56566pj72c 00:06:04.953 19:44:33 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s mfgawf48vnz4r52hp3q062srhjyajw91 00:06:04.953 19:44:33 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s rub9t7z72ffi4104rzduyt56566pj72c 00:06:04.953 19:44:33 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:04.953 [2024-07-24 19:44:33.561177] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:06:04.953 [2024-07-24 19:44:33.561317] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62272 ] 00:06:05.210 [2024-07-24 19:44:33.706150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.467 [2024-07-24 19:44:33.890400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.467 [2024-07-24 19:44:33.973420] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:05.725  Copying: 32/32 [B] (average 31 kBps) 00:06:05.725 00:06:05.725 19:44:34 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ rub9t7z72ffi4104rzduyt56566pj72cmfgawf48vnz4r52hp3q062srhjyajw91 == \r\u\b\9\t\7\z\7\2\f\f\i\4\1\0\4\r\z\d\u\y\t\5\6\5\6\6\p\j\7\2\c\m\f\g\a\w\f\4\8\v\n\z\4\r\5\2\h\p\3\q\0\6\2\s\r\h\j\y\a\j\w\9\1 ]] 00:06:05.725 00:06:05.725 real 
0m0.854s 00:06:05.725 user 0m0.515s 00:06:05.725 sys 0m0.419s 00:06:05.725 ************************************ 00:06:05.725 END TEST dd_flag_append 00:06:05.725 ************************************ 00:06:05.725 19:44:34 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.725 19:44:34 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:05.983 19:44:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:05.983 19:44:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.983 19:44:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.983 19:44:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:05.983 ************************************ 00:06:05.983 START TEST dd_flag_directory 00:06:05.983 ************************************ 00:06:05.983 19:44:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1125 -- # directory 00:06:05.983 19:44:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:05.983 19:44:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:06:05.983 19:44:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:05.983 19:44:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:05.983 19:44:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:06:05.983 19:44:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:05.983 19:44:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.983 19:44:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:05.983 19:44:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.983 19:44:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:05.983 19:44:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:05.983 19:44:34 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:05.983 [2024-07-24 19:44:34.470911] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:05.983 [2024-07-24 19:44:34.471268] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62306 ] 00:06:05.983 [2024-07-24 19:44:34.611549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.253 [2024-07-24 19:44:34.766103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.253 [2024-07-24 19:44:34.846571] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:06.253 [2024-07-24 19:44:34.896869] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:06.253 [2024-07-24 19:44:34.896939] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:06.253 [2024-07-24 19:44:34.896970] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:06.511 [2024-07-24 19:44:35.071551] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:06.769 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:06:06.769 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:06.769 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:06:06.769 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:06:06.769 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:06:06.769 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:06.769 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:06.769 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:06:06.769 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:06.769 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:06.770 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.770 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:06.770 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.770 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:06.770 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.770 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:06.770 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:06.770 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:06.770 [2024-07-24 19:44:35.280571] Starting SPDK 
v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:06:06.770 [2024-07-24 19:44:35.280706] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62320 ] 00:06:06.770 [2024-07-24 19:44:35.433050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.027 [2024-07-24 19:44:35.550981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.027 [2024-07-24 19:44:35.598606] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:07.027 [2024-07-24 19:44:35.630293] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:07.028 [2024-07-24 19:44:35.630353] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:07.028 [2024-07-24 19:44:35.630372] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:07.286 [2024-07-24 19:44:35.755187] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:07.286 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:06:07.286 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:07.286 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:06:07.286 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:06:07.286 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:06:07.286 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:07.286 00:06:07.286 real 0m1.493s 00:06:07.286 user 0m0.910s 00:06:07.286 sys 0m0.369s 00:06:07.286 
************************************ 00:06:07.286 END TEST dd_flag_directory 00:06:07.286 ************************************ 00:06:07.286 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.286 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:06:07.544 19:44:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:06:07.544 19:44:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.544 19:44:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.544 19:44:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:07.544 ************************************ 00:06:07.544 START TEST dd_flag_nofollow 00:06:07.544 ************************************ 00:06:07.544 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1125 -- # nofollow 00:06:07.544 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:07.545 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:07.545 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:07.545 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:07.545 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:07.545 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- 
common/autotest_common.sh@650 -- # local es=0 00:06:07.545 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:07.545 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:07.545 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:07.545 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:07.545 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:07.545 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:07.545 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:07.545 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:07.545 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:07.545 19:44:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:07.545 [2024-07-24 19:44:36.030040] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:07.545 [2024-07-24 19:44:36.030138] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62344 ] 00:06:07.545 [2024-07-24 19:44:36.173783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.803 [2024-07-24 19:44:36.288973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.803 [2024-07-24 19:44:36.332316] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:07.803 [2024-07-24 19:44:36.360648] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:07.803 [2024-07-24 19:44:36.360696] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:07.803 [2024-07-24 19:44:36.360711] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:07.803 [2024-07-24 19:44:36.454716] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:08.062 19:44:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:06:08.062 19:44:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:08.062 19:44:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:06:08.062 19:44:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:06:08.062 19:44:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:06:08.062 19:44:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:08.062 19:44:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:08.062 19:44:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:06:08.062 19:44:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:08.062 19:44:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.062 19:44:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.062 19:44:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.062 19:44:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.062 19:44:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.062 19:44:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.062 19:44:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.062 19:44:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:08.062 19:44:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:08.062 
[2024-07-24 19:44:36.599764] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:06:08.062 [2024-07-24 19:44:36.600038] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62359 ] 00:06:08.320 [2024-07-24 19:44:36.739897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.320 [2024-07-24 19:44:36.842652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.320 [2024-07-24 19:44:36.885786] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:08.320 [2024-07-24 19:44:36.914827] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:08.320 [2024-07-24 19:44:36.914880] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:08.320 [2024-07-24 19:44:36.914899] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:08.579 [2024-07-24 19:44:37.009062] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:08.579 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:06:08.579 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:08.579 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:06:08.579 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:06:08.579 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:06:08.579 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:08.579 19:44:37 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:06:08.579 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:06:08.579 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:08.579 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:08.579 [2024-07-24 19:44:37.147920] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:06:08.579 [2024-07-24 19:44:37.148010] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62367 ] 00:06:08.837 [2024-07-24 19:44:37.278547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.837 [2024-07-24 19:44:37.382161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.837 [2024-07-24 19:44:37.425150] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:09.096  Copying: 512/512 [B] (average 500 kBps) 00:06:09.096 00:06:09.096 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ zrity70ojic9ssrj5aaw5fxj70sha0r0nsvjfvtzklzpi0gi92lx1puxr8pt4samm4owmwmdas9mdaspzi2q9w33hi1e94hzokxit8zjl9grgwwu3nuakio71oqtyec1zhzgou0qbt4vx3h85ayiroygerssdfr34ht6bq421b4bsb8mmfce6adp8v0lqx27p01pymbc850ll3esk2m7vpjd8lld2n38726fha7iitdyjre8ldlugq5jsud6r2mg92cdokd0qu066umar62udkm58086lkz8zys0qnexuulclxy10lt24994yo5zc7o01q6z0xgbzwkmedlcz80mtmy6rtug6coffnzqqqiyjrp5pp3v3g1i97tvflrkrp4lkdjcf5yg7zp1gz73ru856cfm784z5h45pfplyeo4bg39sjg4u1b4sduaah6bnlim3rj8dacn6lyxlybxxxd6rewy29y6dl7fl7h7l35zcqwet6c2zcm4qa5nxlileh2b == 
\z\r\i\t\y\7\0\o\j\i\c\9\s\s\r\j\5\a\a\w\5\f\x\j\7\0\s\h\a\0\r\0\n\s\v\j\f\v\t\z\k\l\z\p\i\0\g\i\9\2\l\x\1\p\u\x\r\8\p\t\4\s\a\m\m\4\o\w\m\w\m\d\a\s\9\m\d\a\s\p\z\i\2\q\9\w\3\3\h\i\1\e\9\4\h\z\o\k\x\i\t\8\z\j\l\9\g\r\g\w\w\u\3\n\u\a\k\i\o\7\1\o\q\t\y\e\c\1\z\h\z\g\o\u\0\q\b\t\4\v\x\3\h\8\5\a\y\i\r\o\y\g\e\r\s\s\d\f\r\3\4\h\t\6\b\q\4\2\1\b\4\b\s\b\8\m\m\f\c\e\6\a\d\p\8\v\0\l\q\x\2\7\p\0\1\p\y\m\b\c\8\5\0\l\l\3\e\s\k\2\m\7\v\p\j\d\8\l\l\d\2\n\3\8\7\2\6\f\h\a\7\i\i\t\d\y\j\r\e\8\l\d\l\u\g\q\5\j\s\u\d\6\r\2\m\g\9\2\c\d\o\k\d\0\q\u\0\6\6\u\m\a\r\6\2\u\d\k\m\5\8\0\8\6\l\k\z\8\z\y\s\0\q\n\e\x\u\u\l\c\l\x\y\1\0\l\t\2\4\9\9\4\y\o\5\z\c\7\o\0\1\q\6\z\0\x\g\b\z\w\k\m\e\d\l\c\z\8\0\m\t\m\y\6\r\t\u\g\6\c\o\f\f\n\z\q\q\q\i\y\j\r\p\5\p\p\3\v\3\g\1\i\9\7\t\v\f\l\r\k\r\p\4\l\k\d\j\c\f\5\y\g\7\z\p\1\g\z\7\3\r\u\8\5\6\c\f\m\7\8\4\z\5\h\4\5\p\f\p\l\y\e\o\4\b\g\3\9\s\j\g\4\u\1\b\4\s\d\u\a\a\h\6\b\n\l\i\m\3\r\j\8\d\a\c\n\6\l\y\x\l\y\b\x\x\x\d\6\r\e\w\y\2\9\y\6\d\l\7\f\l\7\h\7\l\3\5\z\c\q\w\e\t\6\c\2\z\c\m\4\q\a\5\n\x\l\i\l\e\h\2\b ]] 00:06:09.096 00:06:09.096 real 0m1.679s 00:06:09.096 user 0m0.964s 00:06:09.096 sys 0m0.501s 00:06:09.096 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.096 ************************************ 00:06:09.096 END TEST dd_flag_nofollow 00:06:09.096 ************************************ 00:06:09.096 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:09.096 19:44:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:09.096 19:44:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.096 19:44:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.096 19:44:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:09.096 ************************************ 00:06:09.096 START TEST dd_flag_noatime 00:06:09.096 
************************************ 00:06:09.096 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1125 -- # noatime 00:06:09.096 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:06:09.096 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:06:09.096 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:06:09.096 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:06:09.096 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:09.096 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:09.096 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1721850277 00:06:09.096 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:09.096 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1721850277 00:06:09.096 19:44:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:06:10.472 19:44:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:10.472 [2024-07-24 19:44:38.770089] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:10.472 [2024-07-24 19:44:38.770167] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62409 ] 00:06:10.472 [2024-07-24 19:44:38.910418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.472 [2024-07-24 19:44:39.028707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.472 [2024-07-24 19:44:39.077424] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:10.731  Copying: 512/512 [B] (average 500 kBps) 00:06:10.731 00:06:10.731 19:44:39 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:10.731 19:44:39 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1721850277 )) 00:06:10.731 19:44:39 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:10.731 19:44:39 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1721850277 )) 00:06:10.731 19:44:39 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:10.731 [2024-07-24 19:44:39.372639] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:10.731 [2024-07-24 19:44:39.372754] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62423 ] 00:06:10.989 [2024-07-24 19:44:39.516763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.989 [2024-07-24 19:44:39.619785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.247 [2024-07-24 19:44:39.663414] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:11.247  Copying: 512/512 [B] (average 500 kBps) 00:06:11.247 00:06:11.247 19:44:39 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:11.247 ************************************ 00:06:11.247 END TEST dd_flag_noatime 00:06:11.247 ************************************ 00:06:11.247 19:44:39 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1721850279 )) 00:06:11.247 00:06:11.247 real 0m2.186s 00:06:11.247 user 0m0.686s 00:06:11.247 sys 0m0.503s 00:06:11.247 19:44:39 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.247 19:44:39 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:11.505 19:44:39 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:11.505 19:44:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:11.505 19:44:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.505 19:44:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:11.505 ************************************ 00:06:11.505 START TEST dd_flags_misc 00:06:11.505 ************************************ 00:06:11.505 19:44:39 spdk_dd.spdk_dd_posix.dd_flags_misc 
-- common/autotest_common.sh@1125 -- # io 00:06:11.505 19:44:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:11.505 19:44:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:11.505 19:44:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:11.505 19:44:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:11.505 19:44:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:11.505 19:44:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:11.505 19:44:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:11.505 19:44:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:11.505 19:44:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:11.505 [2024-07-24 19:44:40.018645] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:11.505 [2024-07-24 19:44:40.018744] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62451 ] 00:06:11.505 [2024-07-24 19:44:40.168760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.764 [2024-07-24 19:44:40.302825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.764 [2024-07-24 19:44:40.389480] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:12.281  Copying: 512/512 [B] (average 500 kBps) 00:06:12.281 00:06:12.281 19:44:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ s9r1esnmm4xcmg5gq0uhc03lapgckbmnc76neuku0469xeqmxavs45zk1n70utv1v7vvrvynmv708as6iky6bddc4bpqh92s2248f396nkzqdw9bal7p2vmuv95rdke7y0nwh9iqlbnm6pp78vm0igmup17r7kyxe1imvqueg2n6wchmbt53qybnvmgjmoy2e6wnxxmt7ybsid6rcg02gxy39uobs7wguj6pk937wy9mal4re3zm5t8dzfktw425svs074to3cj591m7cf1u87q15bfcn3mac2ev83yhcth7oh6zlsjdwlj8wryittirurjugqnk618c7rucfah2vb0vogy2kx3rdrk0mghme1mldijswcvhekpa5nfq0d713by3k7p30zennhi1gs6kz0kkxj8c2ip1inwy60rk6vn88jwe6ac2i0nqn5w4vaw3j18hu007eqtufiqkuum4yfs7j9gc7xgu3b6fehr5si39ap6jnvr6c3hm243pvre8 == 
\s\9\r\1\e\s\n\m\m\4\x\c\m\g\5\g\q\0\u\h\c\0\3\l\a\p\g\c\k\b\m\n\c\7\6\n\e\u\k\u\0\4\6\9\x\e\q\m\x\a\v\s\4\5\z\k\1\n\7\0\u\t\v\1\v\7\v\v\r\v\y\n\m\v\7\0\8\a\s\6\i\k\y\6\b\d\d\c\4\b\p\q\h\9\2\s\2\2\4\8\f\3\9\6\n\k\z\q\d\w\9\b\a\l\7\p\2\v\m\u\v\9\5\r\d\k\e\7\y\0\n\w\h\9\i\q\l\b\n\m\6\p\p\7\8\v\m\0\i\g\m\u\p\1\7\r\7\k\y\x\e\1\i\m\v\q\u\e\g\2\n\6\w\c\h\m\b\t\5\3\q\y\b\n\v\m\g\j\m\o\y\2\e\6\w\n\x\x\m\t\7\y\b\s\i\d\6\r\c\g\0\2\g\x\y\3\9\u\o\b\s\7\w\g\u\j\6\p\k\9\3\7\w\y\9\m\a\l\4\r\e\3\z\m\5\t\8\d\z\f\k\t\w\4\2\5\s\v\s\0\7\4\t\o\3\c\j\5\9\1\m\7\c\f\1\u\8\7\q\1\5\b\f\c\n\3\m\a\c\2\e\v\8\3\y\h\c\t\h\7\o\h\6\z\l\s\j\d\w\l\j\8\w\r\y\i\t\t\i\r\u\r\j\u\g\q\n\k\6\1\8\c\7\r\u\c\f\a\h\2\v\b\0\v\o\g\y\2\k\x\3\r\d\r\k\0\m\g\h\m\e\1\m\l\d\i\j\s\w\c\v\h\e\k\p\a\5\n\f\q\0\d\7\1\3\b\y\3\k\7\p\3\0\z\e\n\n\h\i\1\g\s\6\k\z\0\k\k\x\j\8\c\2\i\p\1\i\n\w\y\6\0\r\k\6\v\n\8\8\j\w\e\6\a\c\2\i\0\n\q\n\5\w\4\v\a\w\3\j\1\8\h\u\0\0\7\e\q\t\u\f\i\q\k\u\u\m\4\y\f\s\7\j\9\g\c\7\x\g\u\3\b\6\f\e\h\r\5\s\i\3\9\a\p\6\j\n\v\r\6\c\3\h\m\2\4\3\p\v\r\e\8 ]] 00:06:12.281 19:44:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:12.282 19:44:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:12.282 [2024-07-24 19:44:40.816649] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:12.282 [2024-07-24 19:44:40.816753] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62466 ] 00:06:12.570 [2024-07-24 19:44:40.960754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.570 [2024-07-24 19:44:41.127825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.570 [2024-07-24 19:44:41.211331] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:13.084  Copying: 512/512 [B] (average 500 kBps) 00:06:13.084 00:06:13.084 19:44:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ s9r1esnmm4xcmg5gq0uhc03lapgckbmnc76neuku0469xeqmxavs45zk1n70utv1v7vvrvynmv708as6iky6bddc4bpqh92s2248f396nkzqdw9bal7p2vmuv95rdke7y0nwh9iqlbnm6pp78vm0igmup17r7kyxe1imvqueg2n6wchmbt53qybnvmgjmoy2e6wnxxmt7ybsid6rcg02gxy39uobs7wguj6pk937wy9mal4re3zm5t8dzfktw425svs074to3cj591m7cf1u87q15bfcn3mac2ev83yhcth7oh6zlsjdwlj8wryittirurjugqnk618c7rucfah2vb0vogy2kx3rdrk0mghme1mldijswcvhekpa5nfq0d713by3k7p30zennhi1gs6kz0kkxj8c2ip1inwy60rk6vn88jwe6ac2i0nqn5w4vaw3j18hu007eqtufiqkuum4yfs7j9gc7xgu3b6fehr5si39ap6jnvr6c3hm243pvre8 == 
\s\9\r\1\e\s\n\m\m\4\x\c\m\g\5\g\q\0\u\h\c\0\3\l\a\p\g\c\k\b\m\n\c\7\6\n\e\u\k\u\0\4\6\9\x\e\q\m\x\a\v\s\4\5\z\k\1\n\7\0\u\t\v\1\v\7\v\v\r\v\y\n\m\v\7\0\8\a\s\6\i\k\y\6\b\d\d\c\4\b\p\q\h\9\2\s\2\2\4\8\f\3\9\6\n\k\z\q\d\w\9\b\a\l\7\p\2\v\m\u\v\9\5\r\d\k\e\7\y\0\n\w\h\9\i\q\l\b\n\m\6\p\p\7\8\v\m\0\i\g\m\u\p\1\7\r\7\k\y\x\e\1\i\m\v\q\u\e\g\2\n\6\w\c\h\m\b\t\5\3\q\y\b\n\v\m\g\j\m\o\y\2\e\6\w\n\x\x\m\t\7\y\b\s\i\d\6\r\c\g\0\2\g\x\y\3\9\u\o\b\s\7\w\g\u\j\6\p\k\9\3\7\w\y\9\m\a\l\4\r\e\3\z\m\5\t\8\d\z\f\k\t\w\4\2\5\s\v\s\0\7\4\t\o\3\c\j\5\9\1\m\7\c\f\1\u\8\7\q\1\5\b\f\c\n\3\m\a\c\2\e\v\8\3\y\h\c\t\h\7\o\h\6\z\l\s\j\d\w\l\j\8\w\r\y\i\t\t\i\r\u\r\j\u\g\q\n\k\6\1\8\c\7\r\u\c\f\a\h\2\v\b\0\v\o\g\y\2\k\x\3\r\d\r\k\0\m\g\h\m\e\1\m\l\d\i\j\s\w\c\v\h\e\k\p\a\5\n\f\q\0\d\7\1\3\b\y\3\k\7\p\3\0\z\e\n\n\h\i\1\g\s\6\k\z\0\k\k\x\j\8\c\2\i\p\1\i\n\w\y\6\0\r\k\6\v\n\8\8\j\w\e\6\a\c\2\i\0\n\q\n\5\w\4\v\a\w\3\j\1\8\h\u\0\0\7\e\q\t\u\f\i\q\k\u\u\m\4\y\f\s\7\j\9\g\c\7\x\g\u\3\b\6\f\e\h\r\5\s\i\3\9\a\p\6\j\n\v\r\6\c\3\h\m\2\4\3\p\v\r\e\8 ]] 00:06:13.084 19:44:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:13.084 19:44:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:13.084 [2024-07-24 19:44:41.639301] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:13.084 [2024-07-24 19:44:41.639397] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62481 ] 00:06:13.342 [2024-07-24 19:44:41.777141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.342 [2024-07-24 19:44:41.956927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.599 [2024-07-24 19:44:42.043463] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:13.857  Copying: 512/512 [B] (average 125 kBps) 00:06:13.857 00:06:13.857 19:44:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ s9r1esnmm4xcmg5gq0uhc03lapgckbmnc76neuku0469xeqmxavs45zk1n70utv1v7vvrvynmv708as6iky6bddc4bpqh92s2248f396nkzqdw9bal7p2vmuv95rdke7y0nwh9iqlbnm6pp78vm0igmup17r7kyxe1imvqueg2n6wchmbt53qybnvmgjmoy2e6wnxxmt7ybsid6rcg02gxy39uobs7wguj6pk937wy9mal4re3zm5t8dzfktw425svs074to3cj591m7cf1u87q15bfcn3mac2ev83yhcth7oh6zlsjdwlj8wryittirurjugqnk618c7rucfah2vb0vogy2kx3rdrk0mghme1mldijswcvhekpa5nfq0d713by3k7p30zennhi1gs6kz0kkxj8c2ip1inwy60rk6vn88jwe6ac2i0nqn5w4vaw3j18hu007eqtufiqkuum4yfs7j9gc7xgu3b6fehr5si39ap6jnvr6c3hm243pvre8 == 
\s\9\r\1\e\s\n\m\m\4\x\c\m\g\5\g\q\0\u\h\c\0\3\l\a\p\g\c\k\b\m\n\c\7\6\n\e\u\k\u\0\4\6\9\x\e\q\m\x\a\v\s\4\5\z\k\1\n\7\0\u\t\v\1\v\7\v\v\r\v\y\n\m\v\7\0\8\a\s\6\i\k\y\6\b\d\d\c\4\b\p\q\h\9\2\s\2\2\4\8\f\3\9\6\n\k\z\q\d\w\9\b\a\l\7\p\2\v\m\u\v\9\5\r\d\k\e\7\y\0\n\w\h\9\i\q\l\b\n\m\6\p\p\7\8\v\m\0\i\g\m\u\p\1\7\r\7\k\y\x\e\1\i\m\v\q\u\e\g\2\n\6\w\c\h\m\b\t\5\3\q\y\b\n\v\m\g\j\m\o\y\2\e\6\w\n\x\x\m\t\7\y\b\s\i\d\6\r\c\g\0\2\g\x\y\3\9\u\o\b\s\7\w\g\u\j\6\p\k\9\3\7\w\y\9\m\a\l\4\r\e\3\z\m\5\t\8\d\z\f\k\t\w\4\2\5\s\v\s\0\7\4\t\o\3\c\j\5\9\1\m\7\c\f\1\u\8\7\q\1\5\b\f\c\n\3\m\a\c\2\e\v\8\3\y\h\c\t\h\7\o\h\6\z\l\s\j\d\w\l\j\8\w\r\y\i\t\t\i\r\u\r\j\u\g\q\n\k\6\1\8\c\7\r\u\c\f\a\h\2\v\b\0\v\o\g\y\2\k\x\3\r\d\r\k\0\m\g\h\m\e\1\m\l\d\i\j\s\w\c\v\h\e\k\p\a\5\n\f\q\0\d\7\1\3\b\y\3\k\7\p\3\0\z\e\n\n\h\i\1\g\s\6\k\z\0\k\k\x\j\8\c\2\i\p\1\i\n\w\y\6\0\r\k\6\v\n\8\8\j\w\e\6\a\c\2\i\0\n\q\n\5\w\4\v\a\w\3\j\1\8\h\u\0\0\7\e\q\t\u\f\i\q\k\u\u\m\4\y\f\s\7\j\9\g\c\7\x\g\u\3\b\6\f\e\h\r\5\s\i\3\9\a\p\6\j\n\v\r\6\c\3\h\m\2\4\3\p\v\r\e\8 ]] 00:06:13.857 19:44:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:13.857 19:44:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:13.857 [2024-07-24 19:44:42.503165] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:13.857 [2024-07-24 19:44:42.503270] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62491 ] 00:06:14.115 [2024-07-24 19:44:42.644878] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.373 [2024-07-24 19:44:42.832417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.373 [2024-07-24 19:44:42.926118] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:14.936  Copying: 512/512 [B] (average 250 kBps) 00:06:14.936 00:06:14.936 19:44:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ s9r1esnmm4xcmg5gq0uhc03lapgckbmnc76neuku0469xeqmxavs45zk1n70utv1v7vvrvynmv708as6iky6bddc4bpqh92s2248f396nkzqdw9bal7p2vmuv95rdke7y0nwh9iqlbnm6pp78vm0igmup17r7kyxe1imvqueg2n6wchmbt53qybnvmgjmoy2e6wnxxmt7ybsid6rcg02gxy39uobs7wguj6pk937wy9mal4re3zm5t8dzfktw425svs074to3cj591m7cf1u87q15bfcn3mac2ev83yhcth7oh6zlsjdwlj8wryittirurjugqnk618c7rucfah2vb0vogy2kx3rdrk0mghme1mldijswcvhekpa5nfq0d713by3k7p30zennhi1gs6kz0kkxj8c2ip1inwy60rk6vn88jwe6ac2i0nqn5w4vaw3j18hu007eqtufiqkuum4yfs7j9gc7xgu3b6fehr5si39ap6jnvr6c3hm243pvre8 == 
\s\9\r\1\e\s\n\m\m\4\x\c\m\g\5\g\q\0\u\h\c\0\3\l\a\p\g\c\k\b\m\n\c\7\6\n\e\u\k\u\0\4\6\9\x\e\q\m\x\a\v\s\4\5\z\k\1\n\7\0\u\t\v\1\v\7\v\v\r\v\y\n\m\v\7\0\8\a\s\6\i\k\y\6\b\d\d\c\4\b\p\q\h\9\2\s\2\2\4\8\f\3\9\6\n\k\z\q\d\w\9\b\a\l\7\p\2\v\m\u\v\9\5\r\d\k\e\7\y\0\n\w\h\9\i\q\l\b\n\m\6\p\p\7\8\v\m\0\i\g\m\u\p\1\7\r\7\k\y\x\e\1\i\m\v\q\u\e\g\2\n\6\w\c\h\m\b\t\5\3\q\y\b\n\v\m\g\j\m\o\y\2\e\6\w\n\x\x\m\t\7\y\b\s\i\d\6\r\c\g\0\2\g\x\y\3\9\u\o\b\s\7\w\g\u\j\6\p\k\9\3\7\w\y\9\m\a\l\4\r\e\3\z\m\5\t\8\d\z\f\k\t\w\4\2\5\s\v\s\0\7\4\t\o\3\c\j\5\9\1\m\7\c\f\1\u\8\7\q\1\5\b\f\c\n\3\m\a\c\2\e\v\8\3\y\h\c\t\h\7\o\h\6\z\l\s\j\d\w\l\j\8\w\r\y\i\t\t\i\r\u\r\j\u\g\q\n\k\6\1\8\c\7\r\u\c\f\a\h\2\v\b\0\v\o\g\y\2\k\x\3\r\d\r\k\0\m\g\h\m\e\1\m\l\d\i\j\s\w\c\v\h\e\k\p\a\5\n\f\q\0\d\7\1\3\b\y\3\k\7\p\3\0\z\e\n\n\h\i\1\g\s\6\k\z\0\k\k\x\j\8\c\2\i\p\1\i\n\w\y\6\0\r\k\6\v\n\8\8\j\w\e\6\a\c\2\i\0\n\q\n\5\w\4\v\a\w\3\j\1\8\h\u\0\0\7\e\q\t\u\f\i\q\k\u\u\m\4\y\f\s\7\j\9\g\c\7\x\g\u\3\b\6\f\e\h\r\5\s\i\3\9\a\p\6\j\n\v\r\6\c\3\h\m\2\4\3\p\v\r\e\8 ]] 00:06:14.936 19:44:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:14.936 19:44:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:14.936 19:44:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:14.936 19:44:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:14.936 19:44:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:14.937 19:44:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:14.937 [2024-07-24 19:44:43.397918] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:14.937 [2024-07-24 19:44:43.398042] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62506 ] 00:06:14.937 [2024-07-24 19:44:43.531372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.193 [2024-07-24 19:44:43.710814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.193 [2024-07-24 19:44:43.804533] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:15.707  Copying: 512/512 [B] (average 500 kBps) 00:06:15.707 00:06:15.707 19:44:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ a7tbmzfw0kg8h1vbqk4a4n6pud8uw6uxvy18h4fbs0g9m871hjmf4uz52lno9vq0krk1cbtw3f0mml373kzgzbms5gtnmym2a5zxvykiwrg97wymt9biubf2s77xdaqrdugd14fhyir0r66uo4ekrw7vbki3z5s4y257kc9s1qdngm2qole2flyhyh7q6h7khllbaw4pnl0flcceu81psv7qigpzu6vr3c6edbo9vjli0apjef0zyh5mbgvd83wk4hdoh8fz9u8x35rr905suvmxua4tu8zcue379pfeoxtb461j7i79rdv948gvkyutlhzxed8htmuqvejkqz5vqsovjz09wksz02d0yod4srggpglhcrlq4stjq0sqabyz12snx452zpklcz0ka70rfdilvcradgi0fsn9j2sqdurn606m1u96oyl8zkdxehtxbv67p0xkcj5kpt6xc0b6nf2akvujmmwfsorjidr67myac63v9fw41bnxoewf2eon == 
\a\7\t\b\m\z\f\w\0\k\g\8\h\1\v\b\q\k\4\a\4\n\6\p\u\d\8\u\w\6\u\x\v\y\1\8\h\4\f\b\s\0\g\9\m\8\7\1\h\j\m\f\4\u\z\5\2\l\n\o\9\v\q\0\k\r\k\1\c\b\t\w\3\f\0\m\m\l\3\7\3\k\z\g\z\b\m\s\5\g\t\n\m\y\m\2\a\5\z\x\v\y\k\i\w\r\g\9\7\w\y\m\t\9\b\i\u\b\f\2\s\7\7\x\d\a\q\r\d\u\g\d\1\4\f\h\y\i\r\0\r\6\6\u\o\4\e\k\r\w\7\v\b\k\i\3\z\5\s\4\y\2\5\7\k\c\9\s\1\q\d\n\g\m\2\q\o\l\e\2\f\l\y\h\y\h\7\q\6\h\7\k\h\l\l\b\a\w\4\p\n\l\0\f\l\c\c\e\u\8\1\p\s\v\7\q\i\g\p\z\u\6\v\r\3\c\6\e\d\b\o\9\v\j\l\i\0\a\p\j\e\f\0\z\y\h\5\m\b\g\v\d\8\3\w\k\4\h\d\o\h\8\f\z\9\u\8\x\3\5\r\r\9\0\5\s\u\v\m\x\u\a\4\t\u\8\z\c\u\e\3\7\9\p\f\e\o\x\t\b\4\6\1\j\7\i\7\9\r\d\v\9\4\8\g\v\k\y\u\t\l\h\z\x\e\d\8\h\t\m\u\q\v\e\j\k\q\z\5\v\q\s\o\v\j\z\0\9\w\k\s\z\0\2\d\0\y\o\d\4\s\r\g\g\p\g\l\h\c\r\l\q\4\s\t\j\q\0\s\q\a\b\y\z\1\2\s\n\x\4\5\2\z\p\k\l\c\z\0\k\a\7\0\r\f\d\i\l\v\c\r\a\d\g\i\0\f\s\n\9\j\2\s\q\d\u\r\n\6\0\6\m\1\u\9\6\o\y\l\8\z\k\d\x\e\h\t\x\b\v\6\7\p\0\x\k\c\j\5\k\p\t\6\x\c\0\b\6\n\f\2\a\k\v\u\j\m\m\w\f\s\o\r\j\i\d\r\6\7\m\y\a\c\6\3\v\9\f\w\4\1\b\n\x\o\e\w\f\2\e\o\n ]] 00:06:15.707 19:44:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:15.707 19:44:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:15.707 [2024-07-24 19:44:44.220592] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:15.707 [2024-07-24 19:44:44.220684] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62515 ] 00:06:15.707 [2024-07-24 19:44:44.356367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.963 [2024-07-24 19:44:44.508845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.963 [2024-07-24 19:44:44.588418] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:16.478  Copying: 512/512 [B] (average 500 kBps) 00:06:16.478 00:06:16.478 19:44:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ a7tbmzfw0kg8h1vbqk4a4n6pud8uw6uxvy18h4fbs0g9m871hjmf4uz52lno9vq0krk1cbtw3f0mml373kzgzbms5gtnmym2a5zxvykiwrg97wymt9biubf2s77xdaqrdugd14fhyir0r66uo4ekrw7vbki3z5s4y257kc9s1qdngm2qole2flyhyh7q6h7khllbaw4pnl0flcceu81psv7qigpzu6vr3c6edbo9vjli0apjef0zyh5mbgvd83wk4hdoh8fz9u8x35rr905suvmxua4tu8zcue379pfeoxtb461j7i79rdv948gvkyutlhzxed8htmuqvejkqz5vqsovjz09wksz02d0yod4srggpglhcrlq4stjq0sqabyz12snx452zpklcz0ka70rfdilvcradgi0fsn9j2sqdurn606m1u96oyl8zkdxehtxbv67p0xkcj5kpt6xc0b6nf2akvujmmwfsorjidr67myac63v9fw41bnxoewf2eon == 
\a\7\t\b\m\z\f\w\0\k\g\8\h\1\v\b\q\k\4\a\4\n\6\p\u\d\8\u\w\6\u\x\v\y\1\8\h\4\f\b\s\0\g\9\m\8\7\1\h\j\m\f\4\u\z\5\2\l\n\o\9\v\q\0\k\r\k\1\c\b\t\w\3\f\0\m\m\l\3\7\3\k\z\g\z\b\m\s\5\g\t\n\m\y\m\2\a\5\z\x\v\y\k\i\w\r\g\9\7\w\y\m\t\9\b\i\u\b\f\2\s\7\7\x\d\a\q\r\d\u\g\d\1\4\f\h\y\i\r\0\r\6\6\u\o\4\e\k\r\w\7\v\b\k\i\3\z\5\s\4\y\2\5\7\k\c\9\s\1\q\d\n\g\m\2\q\o\l\e\2\f\l\y\h\y\h\7\q\6\h\7\k\h\l\l\b\a\w\4\p\n\l\0\f\l\c\c\e\u\8\1\p\s\v\7\q\i\g\p\z\u\6\v\r\3\c\6\e\d\b\o\9\v\j\l\i\0\a\p\j\e\f\0\z\y\h\5\m\b\g\v\d\8\3\w\k\4\h\d\o\h\8\f\z\9\u\8\x\3\5\r\r\9\0\5\s\u\v\m\x\u\a\4\t\u\8\z\c\u\e\3\7\9\p\f\e\o\x\t\b\4\6\1\j\7\i\7\9\r\d\v\9\4\8\g\v\k\y\u\t\l\h\z\x\e\d\8\h\t\m\u\q\v\e\j\k\q\z\5\v\q\s\o\v\j\z\0\9\w\k\s\z\0\2\d\0\y\o\d\4\s\r\g\g\p\g\l\h\c\r\l\q\4\s\t\j\q\0\s\q\a\b\y\z\1\2\s\n\x\4\5\2\z\p\k\l\c\z\0\k\a\7\0\r\f\d\i\l\v\c\r\a\d\g\i\0\f\s\n\9\j\2\s\q\d\u\r\n\6\0\6\m\1\u\9\6\o\y\l\8\z\k\d\x\e\h\t\x\b\v\6\7\p\0\x\k\c\j\5\k\p\t\6\x\c\0\b\6\n\f\2\a\k\v\u\j\m\m\w\f\s\o\r\j\i\d\r\6\7\m\y\a\c\6\3\v\9\f\w\4\1\b\n\x\o\e\w\f\2\e\o\n ]] 00:06:16.478 19:44:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:16.478 19:44:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:16.478 [2024-07-24 19:44:44.994964] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:16.478 [2024-07-24 19:44:44.995057] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62530 ] 00:06:16.478 [2024-07-24 19:44:45.129757] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.734 [2024-07-24 19:44:45.294699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.734 [2024-07-24 19:44:45.376011] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:17.248  Copying: 512/512 [B] (average 166 kBps) 00:06:17.248 00:06:17.248 19:44:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ a7tbmzfw0kg8h1vbqk4a4n6pud8uw6uxvy18h4fbs0g9m871hjmf4uz52lno9vq0krk1cbtw3f0mml373kzgzbms5gtnmym2a5zxvykiwrg97wymt9biubf2s77xdaqrdugd14fhyir0r66uo4ekrw7vbki3z5s4y257kc9s1qdngm2qole2flyhyh7q6h7khllbaw4pnl0flcceu81psv7qigpzu6vr3c6edbo9vjli0apjef0zyh5mbgvd83wk4hdoh8fz9u8x35rr905suvmxua4tu8zcue379pfeoxtb461j7i79rdv948gvkyutlhzxed8htmuqvejkqz5vqsovjz09wksz02d0yod4srggpglhcrlq4stjq0sqabyz12snx452zpklcz0ka70rfdilvcradgi0fsn9j2sqdurn606m1u96oyl8zkdxehtxbv67p0xkcj5kpt6xc0b6nf2akvujmmwfsorjidr67myac63v9fw41bnxoewf2eon == 
\a\7\t\b\m\z\f\w\0\k\g\8\h\1\v\b\q\k\4\a\4\n\6\p\u\d\8\u\w\6\u\x\v\y\1\8\h\4\f\b\s\0\g\9\m\8\7\1\h\j\m\f\4\u\z\5\2\l\n\o\9\v\q\0\k\r\k\1\c\b\t\w\3\f\0\m\m\l\3\7\3\k\z\g\z\b\m\s\5\g\t\n\m\y\m\2\a\5\z\x\v\y\k\i\w\r\g\9\7\w\y\m\t\9\b\i\u\b\f\2\s\7\7\x\d\a\q\r\d\u\g\d\1\4\f\h\y\i\r\0\r\6\6\u\o\4\e\k\r\w\7\v\b\k\i\3\z\5\s\4\y\2\5\7\k\c\9\s\1\q\d\n\g\m\2\q\o\l\e\2\f\l\y\h\y\h\7\q\6\h\7\k\h\l\l\b\a\w\4\p\n\l\0\f\l\c\c\e\u\8\1\p\s\v\7\q\i\g\p\z\u\6\v\r\3\c\6\e\d\b\o\9\v\j\l\i\0\a\p\j\e\f\0\z\y\h\5\m\b\g\v\d\8\3\w\k\4\h\d\o\h\8\f\z\9\u\8\x\3\5\r\r\9\0\5\s\u\v\m\x\u\a\4\t\u\8\z\c\u\e\3\7\9\p\f\e\o\x\t\b\4\6\1\j\7\i\7\9\r\d\v\9\4\8\g\v\k\y\u\t\l\h\z\x\e\d\8\h\t\m\u\q\v\e\j\k\q\z\5\v\q\s\o\v\j\z\0\9\w\k\s\z\0\2\d\0\y\o\d\4\s\r\g\g\p\g\l\h\c\r\l\q\4\s\t\j\q\0\s\q\a\b\y\z\1\2\s\n\x\4\5\2\z\p\k\l\c\z\0\k\a\7\0\r\f\d\i\l\v\c\r\a\d\g\i\0\f\s\n\9\j\2\s\q\d\u\r\n\6\0\6\m\1\u\9\6\o\y\l\8\z\k\d\x\e\h\t\x\b\v\6\7\p\0\x\k\c\j\5\k\p\t\6\x\c\0\b\6\n\f\2\a\k\v\u\j\m\m\w\f\s\o\r\j\i\d\r\6\7\m\y\a\c\6\3\v\9\f\w\4\1\b\n\x\o\e\w\f\2\e\o\n ]] 00:06:17.248 19:44:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:17.248 19:44:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:17.248 [2024-07-24 19:44:45.797320] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:17.248 [2024-07-24 19:44:45.797414] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62544 ] 00:06:17.505 [2024-07-24 19:44:45.940170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.505 [2024-07-24 19:44:46.111968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.762 [2024-07-24 19:44:46.197196] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:18.022  Copying: 512/512 [B] (average 500 kBps) 00:06:18.022 00:06:18.022 ************************************ 00:06:18.022 END TEST dd_flags_misc 00:06:18.022 ************************************ 00:06:18.022 19:44:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ a7tbmzfw0kg8h1vbqk4a4n6pud8uw6uxvy18h4fbs0g9m871hjmf4uz52lno9vq0krk1cbtw3f0mml373kzgzbms5gtnmym2a5zxvykiwrg97wymt9biubf2s77xdaqrdugd14fhyir0r66uo4ekrw7vbki3z5s4y257kc9s1qdngm2qole2flyhyh7q6h7khllbaw4pnl0flcceu81psv7qigpzu6vr3c6edbo9vjli0apjef0zyh5mbgvd83wk4hdoh8fz9u8x35rr905suvmxua4tu8zcue379pfeoxtb461j7i79rdv948gvkyutlhzxed8htmuqvejkqz5vqsovjz09wksz02d0yod4srggpglhcrlq4stjq0sqabyz12snx452zpklcz0ka70rfdilvcradgi0fsn9j2sqdurn606m1u96oyl8zkdxehtxbv67p0xkcj5kpt6xc0b6nf2akvujmmwfsorjidr67myac63v9fw41bnxoewf2eon == 
\a\7\t\b\m\z\f\w\0\k\g\8\h\1\v\b\q\k\4\a\4\n\6\p\u\d\8\u\w\6\u\x\v\y\1\8\h\4\f\b\s\0\g\9\m\8\7\1\h\j\m\f\4\u\z\5\2\l\n\o\9\v\q\0\k\r\k\1\c\b\t\w\3\f\0\m\m\l\3\7\3\k\z\g\z\b\m\s\5\g\t\n\m\y\m\2\a\5\z\x\v\y\k\i\w\r\g\9\7\w\y\m\t\9\b\i\u\b\f\2\s\7\7\x\d\a\q\r\d\u\g\d\1\4\f\h\y\i\r\0\r\6\6\u\o\4\e\k\r\w\7\v\b\k\i\3\z\5\s\4\y\2\5\7\k\c\9\s\1\q\d\n\g\m\2\q\o\l\e\2\f\l\y\h\y\h\7\q\6\h\7\k\h\l\l\b\a\w\4\p\n\l\0\f\l\c\c\e\u\8\1\p\s\v\7\q\i\g\p\z\u\6\v\r\3\c\6\e\d\b\o\9\v\j\l\i\0\a\p\j\e\f\0\z\y\h\5\m\b\g\v\d\8\3\w\k\4\h\d\o\h\8\f\z\9\u\8\x\3\5\r\r\9\0\5\s\u\v\m\x\u\a\4\t\u\8\z\c\u\e\3\7\9\p\f\e\o\x\t\b\4\6\1\j\7\i\7\9\r\d\v\9\4\8\g\v\k\y\u\t\l\h\z\x\e\d\8\h\t\m\u\q\v\e\j\k\q\z\5\v\q\s\o\v\j\z\0\9\w\k\s\z\0\2\d\0\y\o\d\4\s\r\g\g\p\g\l\h\c\r\l\q\4\s\t\j\q\0\s\q\a\b\y\z\1\2\s\n\x\4\5\2\z\p\k\l\c\z\0\k\a\7\0\r\f\d\i\l\v\c\r\a\d\g\i\0\f\s\n\9\j\2\s\q\d\u\r\n\6\0\6\m\1\u\9\6\o\y\l\8\z\k\d\x\e\h\t\x\b\v\6\7\p\0\x\k\c\j\5\k\p\t\6\x\c\0\b\6\n\f\2\a\k\v\u\j\m\m\w\f\s\o\r\j\i\d\r\6\7\m\y\a\c\6\3\v\9\f\w\4\1\b\n\x\o\e\w\f\2\e\o\n ]] 00:06:18.022 00:06:18.022 real 0m6.606s 00:06:18.022 user 0m4.014s 00:06:18.022 sys 0m3.258s 00:06:18.022 19:44:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.022 19:44:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:18.022 19:44:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:06:18.022 19:44:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:06:18.022 * Second test run, disabling liburing, forcing AIO 00:06:18.022 19:44:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:06:18.022 19:44:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:06:18.022 19:44:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.022 19:44:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:06:18.022 19:44:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:18.022 ************************************ 00:06:18.022 START TEST dd_flag_append_forced_aio 00:06:18.022 ************************************ 00:06:18.022 19:44:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1125 -- # append 00:06:18.022 19:44:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:06:18.022 19:44:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:06:18.022 19:44:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:06:18.022 19:44:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:18.022 19:44:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:18.022 19:44:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=8a0eq04um37d2enr8qj14gusg77jsa33 00:06:18.022 19:44:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:06:18.022 19:44:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:18.022 19:44:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:18.022 19:44:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=fjypj5xfr3lxqif5nfla04u8inwt5pjn 00:06:18.022 19:44:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s 8a0eq04um37d2enr8qj14gusg77jsa33 00:06:18.022 19:44:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s fjypj5xfr3lxqif5nfla04u8inwt5pjn 00:06:18.022 19:44:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:18.022 [2024-07-24 19:44:46.680567] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:06:18.022 [2024-07-24 19:44:46.680670] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62574 ] 00:06:18.280 [2024-07-24 19:44:46.820750] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.537 [2024-07-24 19:44:46.972293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.537 [2024-07-24 19:44:47.050998] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:18.840  Copying: 32/32 [B] (average 31 kBps) 00:06:18.840 00:06:18.840 ************************************ 00:06:18.840 END TEST dd_flag_append_forced_aio 00:06:18.840 ************************************ 00:06:18.840 19:44:47 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ fjypj5xfr3lxqif5nfla04u8inwt5pjn8a0eq04um37d2enr8qj14gusg77jsa33 == \f\j\y\p\j\5\x\f\r\3\l\x\q\i\f\5\n\f\l\a\0\4\u\8\i\n\w\t\5\p\j\n\8\a\0\e\q\0\4\u\m\3\7\d\2\e\n\r\8\q\j\1\4\g\u\s\g\7\7\j\s\a\3\3 ]] 00:06:18.840 00:06:18.840 real 0m0.808s 00:06:18.840 user 0m0.473s 00:06:18.840 sys 0m0.211s 00:06:18.840 19:44:47 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.840 19:44:47 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:18.840 19:44:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:06:18.840 19:44:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.840 19:44:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.841 
19:44:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:18.841 ************************************ 00:06:18.841 START TEST dd_flag_directory_forced_aio 00:06:18.841 ************************************ 00:06:18.841 19:44:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1125 -- # directory 00:06:18.841 19:44:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:18.841 19:44:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:18.841 19:44:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:18.841 19:44:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.841 19:44:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.841 19:44:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.841 19:44:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.841 19:44:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.841 19:44:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.841 19:44:47 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:18.841 19:44:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:18.841 19:44:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:19.099 [2024-07-24 19:44:47.546560] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:06:19.099 [2024-07-24 19:44:47.546699] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62606 ] 00:06:19.099 [2024-07-24 19:44:47.693511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.356 [2024-07-24 19:44:47.846666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.356 [2024-07-24 19:44:47.926932] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:19.356 [2024-07-24 19:44:47.979520] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:19.356 [2024-07-24 19:44:47.979586] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:19.356 [2024-07-24 19:44:47.979600] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:19.613 [2024-07-24 19:44:48.156592] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:19.871 19:44:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 
00:06:19.871 19:44:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:19.871 19:44:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:06:19.871 19:44:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:19.871 19:44:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:19.871 19:44:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:19.871 19:44:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:19.871 19:44:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:19.872 19:44:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:19.872 19:44:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:19.872 19:44:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.872 19:44:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:19.872 19:44:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.872 19:44:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- 
common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:19.872 19:44:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.872 19:44:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:19.872 19:44:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:19.872 19:44:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:19.872 [2024-07-24 19:44:48.366747] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:06:19.872 [2024-07-24 19:44:48.366884] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62615 ] 00:06:19.872 [2024-07-24 19:44:48.511991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.131 [2024-07-24 19:44:48.666883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.131 [2024-07-24 19:44:48.745529] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:20.131 [2024-07-24 19:44:48.794488] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:20.131 [2024-07-24 19:44:48.794549] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:20.131 [2024-07-24 19:44:48.794564] app.c:1053:spdk_app_stop: *WARNING*: 
spdk_app_stop'd on non-zero 00:06:20.389 [2024-07-24 19:44:48.966660] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:20.647 19:44:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:06:20.647 19:44:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:20.647 19:44:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:06:20.647 19:44:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:20.647 19:44:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:20.647 19:44:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:20.647 00:06:20.647 real 0m1.619s 00:06:20.647 user 0m0.959s 00:06:20.647 sys 0m0.446s 00:06:20.647 ************************************ 00:06:20.647 END TEST dd_flag_directory_forced_aio 00:06:20.647 ************************************ 00:06:20.647 19:44:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.647 19:44:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:20.647 19:44:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:06:20.647 19:44:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:20.647 19:44:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.647 19:44:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:20.647 ************************************ 00:06:20.647 START TEST dd_flag_nofollow_forced_aio 00:06:20.647 ************************************ 00:06:20.647 19:44:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@1125 -- # nofollow 00:06:20.647 19:44:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:20.647 19:44:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:20.647 19:44:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:20.647 19:44:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:20.647 19:44:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:20.647 19:44:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:20.647 19:44:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:20.647 19:44:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:20.647 19:44:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.647 19:44:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:20.647 19:44:49 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.647 19:44:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:20.647 19:44:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.647 19:44:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:20.647 19:44:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:20.647 19:44:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:20.647 [2024-07-24 19:44:49.237699] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:20.648 [2024-07-24 19:44:49.237811] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62649 ] 00:06:20.906 [2024-07-24 19:44:49.379882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.906 [2024-07-24 19:44:49.535025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.164 [2024-07-24 19:44:49.616175] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:21.164 [2024-07-24 19:44:49.667766] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:21.164 [2024-07-24 19:44:49.667836] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:21.164 [2024-07-24 19:44:49.667852] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:21.421 [2024-07-24 19:44:49.842709] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:21.421 19:44:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:06:21.421 19:44:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:21.421 19:44:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:06:21.421 19:44:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:21.421 19:44:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:21.421 19:44:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:21.421 19:44:49 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:21.421 19:44:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:21.421 19:44:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:21.421 19:44:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:21.421 19:44:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.421 19:44:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:21.421 19:44:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.421 19:44:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:21.421 19:44:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.421 19:44:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:21.421 19:44:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:21.421 19:44:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:21.421 [2024-07-24 19:44:50.030943] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:06:21.421 [2024-07-24 19:44:50.031044] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62661 ] 00:06:21.678 [2024-07-24 19:44:50.167274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.678 [2024-07-24 19:44:50.318505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.936 [2024-07-24 19:44:50.398153] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:21.936 [2024-07-24 19:44:50.447555] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:21.936 [2024-07-24 19:44:50.447614] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:21.936 [2024-07-24 19:44:50.447629] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:22.193 [2024-07-24 19:44:50.618728] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:22.193 19:44:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:06:22.193 19:44:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:22.193 19:44:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:06:22.193 19:44:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@663 -- # case "$es" in 00:06:22.193 19:44:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:22.193 19:44:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:22.193 19:44:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:06:22.193 19:44:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:22.193 19:44:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:22.193 19:44:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:22.193 [2024-07-24 19:44:50.824321] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:06:22.194 [2024-07-24 19:44:50.824421] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62672 ] 00:06:22.452 [2024-07-24 19:44:50.966152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.711 [2024-07-24 19:44:51.119693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.711 [2024-07-24 19:44:51.200427] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:22.969  Copying: 512/512 [B] (average 500 kBps) 00:06:22.969 00:06:22.969 ************************************ 00:06:22.969 END TEST dd_flag_nofollow_forced_aio 00:06:22.969 ************************************ 00:06:22.969 19:44:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ 
12uij3kwb7coqg3wg6qlxje7p2u9pr4er285whs3m7h8c9dt0vwu73tcw91ja2rubqeqkbngf1n9ibz20rpv0zn9xebpvtz4ghsnci7dd3vatyar24qxnaguqh8vq0xb4d85nitawzzlwixor9uummsqnf8ov4b76eom6dlfo0os05jgrvr6ou2ounznggxx0393ay0upkrcodj4l80mtud11k0x85n6i4diy4twwvvcgmxt8t7pp1qns9aosjemi4g28x031shouhzzzjfzng4ij853eu6ttj7ltnc0tpdqye2das77zlcktt9hdes1m0vf1z3boa94l711z4a6zr8b34lsxfsks1fxdjtosvun18a3er0phglyij3h39sw7m936he5rz7ohbkeb2bnm5q8i03k55us6h7yjdg3kwky3v6de23xfcnyonklwkux3xqp96zvs683bwnualexaccau8bzexmqwchqf59bqyyr9cmldq87qmv92zw5x3im == \1\2\u\i\j\3\k\w\b\7\c\o\q\g\3\w\g\6\q\l\x\j\e\7\p\2\u\9\p\r\4\e\r\2\8\5\w\h\s\3\m\7\h\8\c\9\d\t\0\v\w\u\7\3\t\c\w\9\1\j\a\2\r\u\b\q\e\q\k\b\n\g\f\1\n\9\i\b\z\2\0\r\p\v\0\z\n\9\x\e\b\p\v\t\z\4\g\h\s\n\c\i\7\d\d\3\v\a\t\y\a\r\2\4\q\x\n\a\g\u\q\h\8\v\q\0\x\b\4\d\8\5\n\i\t\a\w\z\z\l\w\i\x\o\r\9\u\u\m\m\s\q\n\f\8\o\v\4\b\7\6\e\o\m\6\d\l\f\o\0\o\s\0\5\j\g\r\v\r\6\o\u\2\o\u\n\z\n\g\g\x\x\0\3\9\3\a\y\0\u\p\k\r\c\o\d\j\4\l\8\0\m\t\u\d\1\1\k\0\x\8\5\n\6\i\4\d\i\y\4\t\w\w\v\v\c\g\m\x\t\8\t\7\p\p\1\q\n\s\9\a\o\s\j\e\m\i\4\g\2\8\x\0\3\1\s\h\o\u\h\z\z\z\j\f\z\n\g\4\i\j\8\5\3\e\u\6\t\t\j\7\l\t\n\c\0\t\p\d\q\y\e\2\d\a\s\7\7\z\l\c\k\t\t\9\h\d\e\s\1\m\0\v\f\1\z\3\b\o\a\9\4\l\7\1\1\z\4\a\6\z\r\8\b\3\4\l\s\x\f\s\k\s\1\f\x\d\j\t\o\s\v\u\n\1\8\a\3\e\r\0\p\h\g\l\y\i\j\3\h\3\9\s\w\7\m\9\3\6\h\e\5\r\z\7\o\h\b\k\e\b\2\b\n\m\5\q\8\i\0\3\k\5\5\u\s\6\h\7\y\j\d\g\3\k\w\k\y\3\v\6\d\e\2\3\x\f\c\n\y\o\n\k\l\w\k\u\x\3\x\q\p\9\6\z\v\s\6\8\3\b\w\n\u\a\l\e\x\a\c\c\a\u\8\b\z\e\x\m\q\w\c\h\q\f\5\9\b\q\y\y\r\9\c\m\l\d\q\8\7\q\m\v\9\2\z\w\5\x\3\i\m ]] 00:06:22.969 00:06:22.969 real 0m2.422s 00:06:22.969 user 0m1.454s 00:06:22.969 sys 0m0.624s 00:06:22.969 19:44:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.969 19:44:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:23.228 19:44:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 
00:06:23.228 19:44:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:23.228 19:44:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.228 19:44:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:23.228 ************************************ 00:06:23.228 START TEST dd_flag_noatime_forced_aio 00:06:23.228 ************************************ 00:06:23.228 19:44:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1125 -- # noatime 00:06:23.228 19:44:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:06:23.228 19:44:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:06:23.228 19:44:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:06:23.228 19:44:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:23.228 19:44:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:23.229 19:44:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:23.229 19:44:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1721850291 00:06:23.229 19:44:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:23.229 19:44:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1721850291 00:06:23.229 19:44:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:06:24.164 19:44:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:24.164 [2024-07-24 19:44:52.748783] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:06:24.164 [2024-07-24 19:44:52.748884] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62719 ] 00:06:24.422 [2024-07-24 19:44:52.896984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.422 [2024-07-24 19:44:53.070191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.681 [2024-07-24 19:44:53.151048] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:24.939  Copying: 512/512 [B] (average 500 kBps) 00:06:24.940 00:06:24.940 19:44:53 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:24.940 19:44:53 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1721850291 )) 00:06:24.940 19:44:53 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:24.940 19:44:53 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1721850291 )) 00:06:24.940 19:44:53 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:24.940 [2024-07-24 19:44:53.590609] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:24.940 [2024-07-24 19:44:53.590705] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62736 ] 00:06:25.197 [2024-07-24 19:44:53.722169] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.456 [2024-07-24 19:44:53.869314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.456 [2024-07-24 19:44:53.946747] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:25.713  Copying: 512/512 [B] (average 500 kBps) 00:06:25.713 00:06:25.713 19:44:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:25.713 19:44:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1721850293 )) 00:06:25.713 00:06:25.713 real 0m2.675s 00:06:25.713 user 0m0.967s 00:06:25.713 sys 0m0.463s 00:06:25.713 19:44:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.713 ************************************ 00:06:25.713 END TEST dd_flag_noatime_forced_aio 00:06:25.713 ************************************ 00:06:25.713 19:44:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:25.971 19:44:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:06:25.971 19:44:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.971 19:44:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.971 19:44:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:25.971 ************************************ 00:06:25.971 START TEST dd_flags_misc_forced_aio 00:06:25.971 
************************************ 00:06:25.971 19:44:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1125 -- # io 00:06:25.971 19:44:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:25.971 19:44:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:25.971 19:44:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:25.971 19:44:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:25.971 19:44:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:25.971 19:44:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:25.971 19:44:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:25.971 19:44:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:25.971 19:44:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:25.971 [2024-07-24 19:44:54.471114] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:25.971 [2024-07-24 19:44:54.471260] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62762 ] 00:06:25.971 [2024-07-24 19:44:54.613294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.229 [2024-07-24 19:44:54.765833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.229 [2024-07-24 19:44:54.845425] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:26.746  Copying: 512/512 [B] (average 500 kBps) 00:06:26.746 00:06:26.746 19:44:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 6k2fwhzjbwidbm1iran7rzs9xw1vbqigqvdwyzmmeau85ne6jern4ivq3jw02iuvng98aldto70n95vkf4abs900vkk44uuqxyv0tukhse2sgo3t0cnx16g6g2f79pjvqesyk0f972q67or35c5zeqs3e4rwl38sk7qr4iem3owaas1pthykwynhe0kkqbxthuq2mmfx89uyer6gah9of3pgncjv57h0el3uocn6tusmtiplbqe87fj2hgvai80krqmrm2bygnvvlqr7ovqc1ks8hcq7365spz4nk42nqdnof6jq1pc80m0e7441l1v8uz7r3m2ncsqny3g19ksbqlqw9pkm1vrn6esnakllitzs3gds8i5mn5ggsatudikdeowembcq6jg9i6g22rkbbyd3o01dpn116mogcim51nr6ve3h4bkotevtig38qz0oimdqlno6pcf95vgiqqg1jk65w6fvcuy71dyices1u9l2hqmsf7qhsyvfqdicxys4 == 
\6\k\2\f\w\h\z\j\b\w\i\d\b\m\1\i\r\a\n\7\r\z\s\9\x\w\1\v\b\q\i\g\q\v\d\w\y\z\m\m\e\a\u\8\5\n\e\6\j\e\r\n\4\i\v\q\3\j\w\0\2\i\u\v\n\g\9\8\a\l\d\t\o\7\0\n\9\5\v\k\f\4\a\b\s\9\0\0\v\k\k\4\4\u\u\q\x\y\v\0\t\u\k\h\s\e\2\s\g\o\3\t\0\c\n\x\1\6\g\6\g\2\f\7\9\p\j\v\q\e\s\y\k\0\f\9\7\2\q\6\7\o\r\3\5\c\5\z\e\q\s\3\e\4\r\w\l\3\8\s\k\7\q\r\4\i\e\m\3\o\w\a\a\s\1\p\t\h\y\k\w\y\n\h\e\0\k\k\q\b\x\t\h\u\q\2\m\m\f\x\8\9\u\y\e\r\6\g\a\h\9\o\f\3\p\g\n\c\j\v\5\7\h\0\e\l\3\u\o\c\n\6\t\u\s\m\t\i\p\l\b\q\e\8\7\f\j\2\h\g\v\a\i\8\0\k\r\q\m\r\m\2\b\y\g\n\v\v\l\q\r\7\o\v\q\c\1\k\s\8\h\c\q\7\3\6\5\s\p\z\4\n\k\4\2\n\q\d\n\o\f\6\j\q\1\p\c\8\0\m\0\e\7\4\4\1\l\1\v\8\u\z\7\r\3\m\2\n\c\s\q\n\y\3\g\1\9\k\s\b\q\l\q\w\9\p\k\m\1\v\r\n\6\e\s\n\a\k\l\l\i\t\z\s\3\g\d\s\8\i\5\m\n\5\g\g\s\a\t\u\d\i\k\d\e\o\w\e\m\b\c\q\6\j\g\9\i\6\g\2\2\r\k\b\b\y\d\3\o\0\1\d\p\n\1\1\6\m\o\g\c\i\m\5\1\n\r\6\v\e\3\h\4\b\k\o\t\e\v\t\i\g\3\8\q\z\0\o\i\m\d\q\l\n\o\6\p\c\f\9\5\v\g\i\q\q\g\1\j\k\6\5\w\6\f\v\c\u\y\7\1\d\y\i\c\e\s\1\u\9\l\2\h\q\m\s\f\7\q\h\s\y\v\f\q\d\i\c\x\y\s\4 ]] 00:06:26.746 19:44:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:26.746 19:44:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:26.746 [2024-07-24 19:44:55.275463] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:26.746 [2024-07-24 19:44:55.275563] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62770 ] 00:06:27.004 [2024-07-24 19:44:55.414528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.004 [2024-07-24 19:44:55.572062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.004 [2024-07-24 19:44:55.653594] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:27.521  Copying: 512/512 [B] (average 500 kBps) 00:06:27.521 00:06:27.521 19:44:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 6k2fwhzjbwidbm1iran7rzs9xw1vbqigqvdwyzmmeau85ne6jern4ivq3jw02iuvng98aldto70n95vkf4abs900vkk44uuqxyv0tukhse2sgo3t0cnx16g6g2f79pjvqesyk0f972q67or35c5zeqs3e4rwl38sk7qr4iem3owaas1pthykwynhe0kkqbxthuq2mmfx89uyer6gah9of3pgncjv57h0el3uocn6tusmtiplbqe87fj2hgvai80krqmrm2bygnvvlqr7ovqc1ks8hcq7365spz4nk42nqdnof6jq1pc80m0e7441l1v8uz7r3m2ncsqny3g19ksbqlqw9pkm1vrn6esnakllitzs3gds8i5mn5ggsatudikdeowembcq6jg9i6g22rkbbyd3o01dpn116mogcim51nr6ve3h4bkotevtig38qz0oimdqlno6pcf95vgiqqg1jk65w6fvcuy71dyices1u9l2hqmsf7qhsyvfqdicxys4 == 
\6\k\2\f\w\h\z\j\b\w\i\d\b\m\1\i\r\a\n\7\r\z\s\9\x\w\1\v\b\q\i\g\q\v\d\w\y\z\m\m\e\a\u\8\5\n\e\6\j\e\r\n\4\i\v\q\3\j\w\0\2\i\u\v\n\g\9\8\a\l\d\t\o\7\0\n\9\5\v\k\f\4\a\b\s\9\0\0\v\k\k\4\4\u\u\q\x\y\v\0\t\u\k\h\s\e\2\s\g\o\3\t\0\c\n\x\1\6\g\6\g\2\f\7\9\p\j\v\q\e\s\y\k\0\f\9\7\2\q\6\7\o\r\3\5\c\5\z\e\q\s\3\e\4\r\w\l\3\8\s\k\7\q\r\4\i\e\m\3\o\w\a\a\s\1\p\t\h\y\k\w\y\n\h\e\0\k\k\q\b\x\t\h\u\q\2\m\m\f\x\8\9\u\y\e\r\6\g\a\h\9\o\f\3\p\g\n\c\j\v\5\7\h\0\e\l\3\u\o\c\n\6\t\u\s\m\t\i\p\l\b\q\e\8\7\f\j\2\h\g\v\a\i\8\0\k\r\q\m\r\m\2\b\y\g\n\v\v\l\q\r\7\o\v\q\c\1\k\s\8\h\c\q\7\3\6\5\s\p\z\4\n\k\4\2\n\q\d\n\o\f\6\j\q\1\p\c\8\0\m\0\e\7\4\4\1\l\1\v\8\u\z\7\r\3\m\2\n\c\s\q\n\y\3\g\1\9\k\s\b\q\l\q\w\9\p\k\m\1\v\r\n\6\e\s\n\a\k\l\l\i\t\z\s\3\g\d\s\8\i\5\m\n\5\g\g\s\a\t\u\d\i\k\d\e\o\w\e\m\b\c\q\6\j\g\9\i\6\g\2\2\r\k\b\b\y\d\3\o\0\1\d\p\n\1\1\6\m\o\g\c\i\m\5\1\n\r\6\v\e\3\h\4\b\k\o\t\e\v\t\i\g\3\8\q\z\0\o\i\m\d\q\l\n\o\6\p\c\f\9\5\v\g\i\q\q\g\1\j\k\6\5\w\6\f\v\c\u\y\7\1\d\y\i\c\e\s\1\u\9\l\2\h\q\m\s\f\7\q\h\s\y\v\f\q\d\i\c\x\y\s\4 ]] 00:06:27.521 19:44:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:27.521 19:44:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:27.521 [2024-07-24 19:44:56.026624] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:27.521 [2024-07-24 19:44:56.026746] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62783 ] 00:06:27.521 [2024-07-24 19:44:56.172808] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.779 [2024-07-24 19:44:56.289971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.780 [2024-07-24 19:44:56.337371] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:28.038  Copying: 512/512 [B] (average 166 kBps) 00:06:28.038 00:06:28.038 19:44:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 6k2fwhzjbwidbm1iran7rzs9xw1vbqigqvdwyzmmeau85ne6jern4ivq3jw02iuvng98aldto70n95vkf4abs900vkk44uuqxyv0tukhse2sgo3t0cnx16g6g2f79pjvqesyk0f972q67or35c5zeqs3e4rwl38sk7qr4iem3owaas1pthykwynhe0kkqbxthuq2mmfx89uyer6gah9of3pgncjv57h0el3uocn6tusmtiplbqe87fj2hgvai80krqmrm2bygnvvlqr7ovqc1ks8hcq7365spz4nk42nqdnof6jq1pc80m0e7441l1v8uz7r3m2ncsqny3g19ksbqlqw9pkm1vrn6esnakllitzs3gds8i5mn5ggsatudikdeowembcq6jg9i6g22rkbbyd3o01dpn116mogcim51nr6ve3h4bkotevtig38qz0oimdqlno6pcf95vgiqqg1jk65w6fvcuy71dyices1u9l2hqmsf7qhsyvfqdicxys4 == 
\6\k\2\f\w\h\z\j\b\w\i\d\b\m\1\i\r\a\n\7\r\z\s\9\x\w\1\v\b\q\i\g\q\v\d\w\y\z\m\m\e\a\u\8\5\n\e\6\j\e\r\n\4\i\v\q\3\j\w\0\2\i\u\v\n\g\9\8\a\l\d\t\o\7\0\n\9\5\v\k\f\4\a\b\s\9\0\0\v\k\k\4\4\u\u\q\x\y\v\0\t\u\k\h\s\e\2\s\g\o\3\t\0\c\n\x\1\6\g\6\g\2\f\7\9\p\j\v\q\e\s\y\k\0\f\9\7\2\q\6\7\o\r\3\5\c\5\z\e\q\s\3\e\4\r\w\l\3\8\s\k\7\q\r\4\i\e\m\3\o\w\a\a\s\1\p\t\h\y\k\w\y\n\h\e\0\k\k\q\b\x\t\h\u\q\2\m\m\f\x\8\9\u\y\e\r\6\g\a\h\9\o\f\3\p\g\n\c\j\v\5\7\h\0\e\l\3\u\o\c\n\6\t\u\s\m\t\i\p\l\b\q\e\8\7\f\j\2\h\g\v\a\i\8\0\k\r\q\m\r\m\2\b\y\g\n\v\v\l\q\r\7\o\v\q\c\1\k\s\8\h\c\q\7\3\6\5\s\p\z\4\n\k\4\2\n\q\d\n\o\f\6\j\q\1\p\c\8\0\m\0\e\7\4\4\1\l\1\v\8\u\z\7\r\3\m\2\n\c\s\q\n\y\3\g\1\9\k\s\b\q\l\q\w\9\p\k\m\1\v\r\n\6\e\s\n\a\k\l\l\i\t\z\s\3\g\d\s\8\i\5\m\n\5\g\g\s\a\t\u\d\i\k\d\e\o\w\e\m\b\c\q\6\j\g\9\i\6\g\2\2\r\k\b\b\y\d\3\o\0\1\d\p\n\1\1\6\m\o\g\c\i\m\5\1\n\r\6\v\e\3\h\4\b\k\o\t\e\v\t\i\g\3\8\q\z\0\o\i\m\d\q\l\n\o\6\p\c\f\9\5\v\g\i\q\q\g\1\j\k\6\5\w\6\f\v\c\u\y\7\1\d\y\i\c\e\s\1\u\9\l\2\h\q\m\s\f\7\q\h\s\y\v\f\q\d\i\c\x\y\s\4 ]] 00:06:28.038 19:44:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:28.038 19:44:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:28.038 [2024-07-24 19:44:56.636703] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:28.038 [2024-07-24 19:44:56.636781] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62796 ] 00:06:28.296 [2024-07-24 19:44:56.768494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.296 [2024-07-24 19:44:56.875562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.296 [2024-07-24 19:44:56.917728] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:28.555  Copying: 512/512 [B] (average 250 kBps) 00:06:28.555 00:06:28.555 19:44:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 6k2fwhzjbwidbm1iran7rzs9xw1vbqigqvdwyzmmeau85ne6jern4ivq3jw02iuvng98aldto70n95vkf4abs900vkk44uuqxyv0tukhse2sgo3t0cnx16g6g2f79pjvqesyk0f972q67or35c5zeqs3e4rwl38sk7qr4iem3owaas1pthykwynhe0kkqbxthuq2mmfx89uyer6gah9of3pgncjv57h0el3uocn6tusmtiplbqe87fj2hgvai80krqmrm2bygnvvlqr7ovqc1ks8hcq7365spz4nk42nqdnof6jq1pc80m0e7441l1v8uz7r3m2ncsqny3g19ksbqlqw9pkm1vrn6esnakllitzs3gds8i5mn5ggsatudikdeowembcq6jg9i6g22rkbbyd3o01dpn116mogcim51nr6ve3h4bkotevtig38qz0oimdqlno6pcf95vgiqqg1jk65w6fvcuy71dyices1u9l2hqmsf7qhsyvfqdicxys4 == 
\6\k\2\f\w\h\z\j\b\w\i\d\b\m\1\i\r\a\n\7\r\z\s\9\x\w\1\v\b\q\i\g\q\v\d\w\y\z\m\m\e\a\u\8\5\n\e\6\j\e\r\n\4\i\v\q\3\j\w\0\2\i\u\v\n\g\9\8\a\l\d\t\o\7\0\n\9\5\v\k\f\4\a\b\s\9\0\0\v\k\k\4\4\u\u\q\x\y\v\0\t\u\k\h\s\e\2\s\g\o\3\t\0\c\n\x\1\6\g\6\g\2\f\7\9\p\j\v\q\e\s\y\k\0\f\9\7\2\q\6\7\o\r\3\5\c\5\z\e\q\s\3\e\4\r\w\l\3\8\s\k\7\q\r\4\i\e\m\3\o\w\a\a\s\1\p\t\h\y\k\w\y\n\h\e\0\k\k\q\b\x\t\h\u\q\2\m\m\f\x\8\9\u\y\e\r\6\g\a\h\9\o\f\3\p\g\n\c\j\v\5\7\h\0\e\l\3\u\o\c\n\6\t\u\s\m\t\i\p\l\b\q\e\8\7\f\j\2\h\g\v\a\i\8\0\k\r\q\m\r\m\2\b\y\g\n\v\v\l\q\r\7\o\v\q\c\1\k\s\8\h\c\q\7\3\6\5\s\p\z\4\n\k\4\2\n\q\d\n\o\f\6\j\q\1\p\c\8\0\m\0\e\7\4\4\1\l\1\v\8\u\z\7\r\3\m\2\n\c\s\q\n\y\3\g\1\9\k\s\b\q\l\q\w\9\p\k\m\1\v\r\n\6\e\s\n\a\k\l\l\i\t\z\s\3\g\d\s\8\i\5\m\n\5\g\g\s\a\t\u\d\i\k\d\e\o\w\e\m\b\c\q\6\j\g\9\i\6\g\2\2\r\k\b\b\y\d\3\o\0\1\d\p\n\1\1\6\m\o\g\c\i\m\5\1\n\r\6\v\e\3\h\4\b\k\o\t\e\v\t\i\g\3\8\q\z\0\o\i\m\d\q\l\n\o\6\p\c\f\9\5\v\g\i\q\q\g\1\j\k\6\5\w\6\f\v\c\u\y\7\1\d\y\i\c\e\s\1\u\9\l\2\h\q\m\s\f\7\q\h\s\y\v\f\q\d\i\c\x\y\s\4 ]] 00:06:28.555 19:44:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:28.555 19:44:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:28.555 19:44:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:28.555 19:44:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:28.555 19:44:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:28.555 19:44:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:28.555 [2024-07-24 19:44:57.213730] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 
initialization... 00:06:28.555 [2024-07-24 19:44:57.213833] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62798 ] 00:06:28.814 [2024-07-24 19:44:57.353532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.814 [2024-07-24 19:44:57.459522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.073 [2024-07-24 19:44:57.502661] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:29.073  Copying: 512/512 [B] (average 500 kBps) 00:06:29.073 00:06:29.332 19:44:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ l04la6492rzl8hece9mmi6cjpqybnrkbj41gfrphrfwy24y02kw11n8v9yo4ubnq95o9v8tvi9z8rg9oya26ymu8th7cadtb38iqtsiu6kbhidhl5xpjbnnqoe3mxk3nir4yd0ilamvie76we4vz7dimpybca7qmq2083mga28kd7rd76r1nogfhqhoqrwe9zna752ukp6ro0u9cpw98sl97t0qua87umc2x1z2oqfypc5yisol34vw39tvg1yh12kgaaamtfja0r96qern5txqs0fm65iu9mxmxn1u5lvn7tji8t4h0kehrgxp7y7joyw2xspck42zd8lmr4t9hjoj8yhm52855ec132eyjk1x5d2gn6vntstdwl5n4im6y04dpmb76s6odqllhzf09m8e052heychi8ejrozi3vpoa9cjpr1yd8shiss0vyguhxrfg44kt7ur9dmeg5l374q51rvkys4leh97j5lpv5v3xxct5byxuvfkpjto9y5o0 == 
\l\0\4\l\a\6\4\9\2\r\z\l\8\h\e\c\e\9\m\m\i\6\c\j\p\q\y\b\n\r\k\b\j\4\1\g\f\r\p\h\r\f\w\y\2\4\y\0\2\k\w\1\1\n\8\v\9\y\o\4\u\b\n\q\9\5\o\9\v\8\t\v\i\9\z\8\r\g\9\o\y\a\2\6\y\m\u\8\t\h\7\c\a\d\t\b\3\8\i\q\t\s\i\u\6\k\b\h\i\d\h\l\5\x\p\j\b\n\n\q\o\e\3\m\x\k\3\n\i\r\4\y\d\0\i\l\a\m\v\i\e\7\6\w\e\4\v\z\7\d\i\m\p\y\b\c\a\7\q\m\q\2\0\8\3\m\g\a\2\8\k\d\7\r\d\7\6\r\1\n\o\g\f\h\q\h\o\q\r\w\e\9\z\n\a\7\5\2\u\k\p\6\r\o\0\u\9\c\p\w\9\8\s\l\9\7\t\0\q\u\a\8\7\u\m\c\2\x\1\z\2\o\q\f\y\p\c\5\y\i\s\o\l\3\4\v\w\3\9\t\v\g\1\y\h\1\2\k\g\a\a\a\m\t\f\j\a\0\r\9\6\q\e\r\n\5\t\x\q\s\0\f\m\6\5\i\u\9\m\x\m\x\n\1\u\5\l\v\n\7\t\j\i\8\t\4\h\0\k\e\h\r\g\x\p\7\y\7\j\o\y\w\2\x\s\p\c\k\4\2\z\d\8\l\m\r\4\t\9\h\j\o\j\8\y\h\m\5\2\8\5\5\e\c\1\3\2\e\y\j\k\1\x\5\d\2\g\n\6\v\n\t\s\t\d\w\l\5\n\4\i\m\6\y\0\4\d\p\m\b\7\6\s\6\o\d\q\l\l\h\z\f\0\9\m\8\e\0\5\2\h\e\y\c\h\i\8\e\j\r\o\z\i\3\v\p\o\a\9\c\j\p\r\1\y\d\8\s\h\i\s\s\0\v\y\g\u\h\x\r\f\g\4\4\k\t\7\u\r\9\d\m\e\g\5\l\3\7\4\q\5\1\r\v\k\y\s\4\l\e\h\9\7\j\5\l\p\v\5\v\3\x\x\c\t\5\b\y\x\u\v\f\k\p\j\t\o\9\y\5\o\0 ]] 00:06:29.332 19:44:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:29.332 19:44:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:29.332 [2024-07-24 19:44:57.797722] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:29.332 [2024-07-24 19:44:57.797825] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62811 ] 00:06:29.332 [2024-07-24 19:44:57.938628] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.591 [2024-07-24 19:44:58.045779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.591 [2024-07-24 19:44:58.089762] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:29.850  Copying: 512/512 [B] (average 500 kBps) 00:06:29.850 00:06:29.850 19:44:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ l04la6492rzl8hece9mmi6cjpqybnrkbj41gfrphrfwy24y02kw11n8v9yo4ubnq95o9v8tvi9z8rg9oya26ymu8th7cadtb38iqtsiu6kbhidhl5xpjbnnqoe3mxk3nir4yd0ilamvie76we4vz7dimpybca7qmq2083mga28kd7rd76r1nogfhqhoqrwe9zna752ukp6ro0u9cpw98sl97t0qua87umc2x1z2oqfypc5yisol34vw39tvg1yh12kgaaamtfja0r96qern5txqs0fm65iu9mxmxn1u5lvn7tji8t4h0kehrgxp7y7joyw2xspck42zd8lmr4t9hjoj8yhm52855ec132eyjk1x5d2gn6vntstdwl5n4im6y04dpmb76s6odqllhzf09m8e052heychi8ejrozi3vpoa9cjpr1yd8shiss0vyguhxrfg44kt7ur9dmeg5l374q51rvkys4leh97j5lpv5v3xxct5byxuvfkpjto9y5o0 == 
\l\0\4\l\a\6\4\9\2\r\z\l\8\h\e\c\e\9\m\m\i\6\c\j\p\q\y\b\n\r\k\b\j\4\1\g\f\r\p\h\r\f\w\y\2\4\y\0\2\k\w\1\1\n\8\v\9\y\o\4\u\b\n\q\9\5\o\9\v\8\t\v\i\9\z\8\r\g\9\o\y\a\2\6\y\m\u\8\t\h\7\c\a\d\t\b\3\8\i\q\t\s\i\u\6\k\b\h\i\d\h\l\5\x\p\j\b\n\n\q\o\e\3\m\x\k\3\n\i\r\4\y\d\0\i\l\a\m\v\i\e\7\6\w\e\4\v\z\7\d\i\m\p\y\b\c\a\7\q\m\q\2\0\8\3\m\g\a\2\8\k\d\7\r\d\7\6\r\1\n\o\g\f\h\q\h\o\q\r\w\e\9\z\n\a\7\5\2\u\k\p\6\r\o\0\u\9\c\p\w\9\8\s\l\9\7\t\0\q\u\a\8\7\u\m\c\2\x\1\z\2\o\q\f\y\p\c\5\y\i\s\o\l\3\4\v\w\3\9\t\v\g\1\y\h\1\2\k\g\a\a\a\m\t\f\j\a\0\r\9\6\q\e\r\n\5\t\x\q\s\0\f\m\6\5\i\u\9\m\x\m\x\n\1\u\5\l\v\n\7\t\j\i\8\t\4\h\0\k\e\h\r\g\x\p\7\y\7\j\o\y\w\2\x\s\p\c\k\4\2\z\d\8\l\m\r\4\t\9\h\j\o\j\8\y\h\m\5\2\8\5\5\e\c\1\3\2\e\y\j\k\1\x\5\d\2\g\n\6\v\n\t\s\t\d\w\l\5\n\4\i\m\6\y\0\4\d\p\m\b\7\6\s\6\o\d\q\l\l\h\z\f\0\9\m\8\e\0\5\2\h\e\y\c\h\i\8\e\j\r\o\z\i\3\v\p\o\a\9\c\j\p\r\1\y\d\8\s\h\i\s\s\0\v\y\g\u\h\x\r\f\g\4\4\k\t\7\u\r\9\d\m\e\g\5\l\3\7\4\q\5\1\r\v\k\y\s\4\l\e\h\9\7\j\5\l\p\v\5\v\3\x\x\c\t\5\b\y\x\u\v\f\k\p\j\t\o\9\y\5\o\0 ]] 00:06:29.850 19:44:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:29.850 19:44:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:29.850 [2024-07-24 19:44:58.358230] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:29.850 [2024-07-24 19:44:58.358309] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62813 ] 00:06:29.850 [2024-07-24 19:44:58.495577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.109 [2024-07-24 19:44:58.599599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.109 [2024-07-24 19:44:58.643182] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:30.367  Copying: 512/512 [B] (average 500 kBps) 00:06:30.367 00:06:30.367 19:44:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ l04la6492rzl8hece9mmi6cjpqybnrkbj41gfrphrfwy24y02kw11n8v9yo4ubnq95o9v8tvi9z8rg9oya26ymu8th7cadtb38iqtsiu6kbhidhl5xpjbnnqoe3mxk3nir4yd0ilamvie76we4vz7dimpybca7qmq2083mga28kd7rd76r1nogfhqhoqrwe9zna752ukp6ro0u9cpw98sl97t0qua87umc2x1z2oqfypc5yisol34vw39tvg1yh12kgaaamtfja0r96qern5txqs0fm65iu9mxmxn1u5lvn7tji8t4h0kehrgxp7y7joyw2xspck42zd8lmr4t9hjoj8yhm52855ec132eyjk1x5d2gn6vntstdwl5n4im6y04dpmb76s6odqllhzf09m8e052heychi8ejrozi3vpoa9cjpr1yd8shiss0vyguhxrfg44kt7ur9dmeg5l374q51rvkys4leh97j5lpv5v3xxct5byxuvfkpjto9y5o0 == 
\l\0\4\l\a\6\4\9\2\r\z\l\8\h\e\c\e\9\m\m\i\6\c\j\p\q\y\b\n\r\k\b\j\4\1\g\f\r\p\h\r\f\w\y\2\4\y\0\2\k\w\1\1\n\8\v\9\y\o\4\u\b\n\q\9\5\o\9\v\8\t\v\i\9\z\8\r\g\9\o\y\a\2\6\y\m\u\8\t\h\7\c\a\d\t\b\3\8\i\q\t\s\i\u\6\k\b\h\i\d\h\l\5\x\p\j\b\n\n\q\o\e\3\m\x\k\3\n\i\r\4\y\d\0\i\l\a\m\v\i\e\7\6\w\e\4\v\z\7\d\i\m\p\y\b\c\a\7\q\m\q\2\0\8\3\m\g\a\2\8\k\d\7\r\d\7\6\r\1\n\o\g\f\h\q\h\o\q\r\w\e\9\z\n\a\7\5\2\u\k\p\6\r\o\0\u\9\c\p\w\9\8\s\l\9\7\t\0\q\u\a\8\7\u\m\c\2\x\1\z\2\o\q\f\y\p\c\5\y\i\s\o\l\3\4\v\w\3\9\t\v\g\1\y\h\1\2\k\g\a\a\a\m\t\f\j\a\0\r\9\6\q\e\r\n\5\t\x\q\s\0\f\m\6\5\i\u\9\m\x\m\x\n\1\u\5\l\v\n\7\t\j\i\8\t\4\h\0\k\e\h\r\g\x\p\7\y\7\j\o\y\w\2\x\s\p\c\k\4\2\z\d\8\l\m\r\4\t\9\h\j\o\j\8\y\h\m\5\2\8\5\5\e\c\1\3\2\e\y\j\k\1\x\5\d\2\g\n\6\v\n\t\s\t\d\w\l\5\n\4\i\m\6\y\0\4\d\p\m\b\7\6\s\6\o\d\q\l\l\h\z\f\0\9\m\8\e\0\5\2\h\e\y\c\h\i\8\e\j\r\o\z\i\3\v\p\o\a\9\c\j\p\r\1\y\d\8\s\h\i\s\s\0\v\y\g\u\h\x\r\f\g\4\4\k\t\7\u\r\9\d\m\e\g\5\l\3\7\4\q\5\1\r\v\k\y\s\4\l\e\h\9\7\j\5\l\p\v\5\v\3\x\x\c\t\5\b\y\x\u\v\f\k\p\j\t\o\9\y\5\o\0 ]] 00:06:30.367 19:44:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:30.367 19:44:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:30.367 [2024-07-24 19:44:58.928512] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:30.367 [2024-07-24 19:44:58.928613] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62826 ] 00:06:30.626 [2024-07-24 19:44:59.071095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.626 [2024-07-24 19:44:59.187387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.626 [2024-07-24 19:44:59.239724] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:30.926  Copying: 512/512 [B] (average 166 kBps) 00:06:30.926 00:06:30.926 19:44:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ l04la6492rzl8hece9mmi6cjpqybnrkbj41gfrphrfwy24y02kw11n8v9yo4ubnq95o9v8tvi9z8rg9oya26ymu8th7cadtb38iqtsiu6kbhidhl5xpjbnnqoe3mxk3nir4yd0ilamvie76we4vz7dimpybca7qmq2083mga28kd7rd76r1nogfhqhoqrwe9zna752ukp6ro0u9cpw98sl97t0qua87umc2x1z2oqfypc5yisol34vw39tvg1yh12kgaaamtfja0r96qern5txqs0fm65iu9mxmxn1u5lvn7tji8t4h0kehrgxp7y7joyw2xspck42zd8lmr4t9hjoj8yhm52855ec132eyjk1x5d2gn6vntstdwl5n4im6y04dpmb76s6odqllhzf09m8e052heychi8ejrozi3vpoa9cjpr1yd8shiss0vyguhxrfg44kt7ur9dmeg5l374q51rvkys4leh97j5lpv5v3xxct5byxuvfkpjto9y5o0 == 
\l\0\4\l\a\6\4\9\2\r\z\l\8\h\e\c\e\9\m\m\i\6\c\j\p\q\y\b\n\r\k\b\j\4\1\g\f\r\p\h\r\f\w\y\2\4\y\0\2\k\w\1\1\n\8\v\9\y\o\4\u\b\n\q\9\5\o\9\v\8\t\v\i\9\z\8\r\g\9\o\y\a\2\6\y\m\u\8\t\h\7\c\a\d\t\b\3\8\i\q\t\s\i\u\6\k\b\h\i\d\h\l\5\x\p\j\b\n\n\q\o\e\3\m\x\k\3\n\i\r\4\y\d\0\i\l\a\m\v\i\e\7\6\w\e\4\v\z\7\d\i\m\p\y\b\c\a\7\q\m\q\2\0\8\3\m\g\a\2\8\k\d\7\r\d\7\6\r\1\n\o\g\f\h\q\h\o\q\r\w\e\9\z\n\a\7\5\2\u\k\p\6\r\o\0\u\9\c\p\w\9\8\s\l\9\7\t\0\q\u\a\8\7\u\m\c\2\x\1\z\2\o\q\f\y\p\c\5\y\i\s\o\l\3\4\v\w\3\9\t\v\g\1\y\h\1\2\k\g\a\a\a\m\t\f\j\a\0\r\9\6\q\e\r\n\5\t\x\q\s\0\f\m\6\5\i\u\9\m\x\m\x\n\1\u\5\l\v\n\7\t\j\i\8\t\4\h\0\k\e\h\r\g\x\p\7\y\7\j\o\y\w\2\x\s\p\c\k\4\2\z\d\8\l\m\r\4\t\9\h\j\o\j\8\y\h\m\5\2\8\5\5\e\c\1\3\2\e\y\j\k\1\x\5\d\2\g\n\6\v\n\t\s\t\d\w\l\5\n\4\i\m\6\y\0\4\d\p\m\b\7\6\s\6\o\d\q\l\l\h\z\f\0\9\m\8\e\0\5\2\h\e\y\c\h\i\8\e\j\r\o\z\i\3\v\p\o\a\9\c\j\p\r\1\y\d\8\s\h\i\s\s\0\v\y\g\u\h\x\r\f\g\4\4\k\t\7\u\r\9\d\m\e\g\5\l\3\7\4\q\5\1\r\v\k\y\s\4\l\e\h\9\7\j\5\l\p\v\5\v\3\x\x\c\t\5\b\y\x\u\v\f\k\p\j\t\o\9\y\5\o\0 ]] 00:06:30.926 00:06:30.926 real 0m5.082s 00:06:30.926 user 0m2.838s 00:06:30.926 sys 0m1.262s 00:06:30.926 19:44:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.926 ************************************ 00:06:30.926 END TEST dd_flags_misc_forced_aio 00:06:30.926 ************************************ 00:06:30.926 19:44:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:30.926 19:44:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:06:30.926 19:44:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:30.926 19:44:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:30.926 00:06:30.926 real 0m26.160s 00:06:30.926 user 0m14.023s 00:06:30.926 
sys 0m8.518s 00:06:30.926 19:44:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.926 ************************************ 00:06:30.926 END TEST spdk_dd_posix 00:06:30.926 ************************************ 00:06:30.926 19:44:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:30.926 19:44:59 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:30.926 19:44:59 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:30.926 19:44:59 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.186 19:44:59 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:31.186 ************************************ 00:06:31.186 START TEST spdk_dd_malloc 00:06:31.186 ************************************ 00:06:31.186 19:44:59 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:31.186 * Looking for test storage... 
00:06:31.186 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:31.186 19:44:59 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:31.186 19:44:59 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:31.187 19:44:59 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:31.187 19:44:59 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:31.187 19:44:59 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.187 19:44:59 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.187 19:44:59 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.187 19:44:59 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:06:31.187 19:44:59 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.187 19:44:59 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:06:31.187 19:44:59 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:31.187 19:44:59 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.187 19:44:59 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:31.187 ************************************ 00:06:31.187 START TEST dd_malloc_copy 00:06:31.187 ************************************ 00:06:31.187 19:44:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1125 -- # malloc_copy 00:06:31.187 19:44:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 
mbdev0_b=1048576 mbdev0_bs=512 00:06:31.187 19:44:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:06:31.187 19:44:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:31.187 19:44:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:06:31.187 19:44:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:06:31.187 19:44:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:06:31.187 19:44:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:06:31.187 19:44:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:06:31.187 19:44:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:31.187 19:44:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:31.187 [2024-07-24 19:44:59.773360] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:31.187 [2024-07-24 19:44:59.773489] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62900 ] 00:06:31.187 { 00:06:31.187 "subsystems": [ 00:06:31.187 { 00:06:31.187 "subsystem": "bdev", 00:06:31.187 "config": [ 00:06:31.187 { 00:06:31.187 "params": { 00:06:31.187 "block_size": 512, 00:06:31.187 "num_blocks": 1048576, 00:06:31.187 "name": "malloc0" 00:06:31.187 }, 00:06:31.187 "method": "bdev_malloc_create" 00:06:31.187 }, 00:06:31.187 { 00:06:31.187 "params": { 00:06:31.187 "block_size": 512, 00:06:31.187 "num_blocks": 1048576, 00:06:31.187 "name": "malloc1" 00:06:31.187 }, 00:06:31.187 "method": "bdev_malloc_create" 00:06:31.187 }, 00:06:31.187 { 00:06:31.187 "method": "bdev_wait_for_examine" 00:06:31.187 } 00:06:31.187 ] 00:06:31.187 } 00:06:31.187 ] 00:06:31.187 } 00:06:31.447 [2024-07-24 19:44:59.920531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.447 [2024-07-24 19:45:00.036052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.447 [2024-07-24 19:45:00.083564] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:34.599  Copying: 216/512 [MB] (216 MBps) Copying: 435/512 [MB] (218 MBps) Copying: 512/512 [MB] (average 217 MBps) 00:06:34.600 00:06:34.600 19:45:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:06:34.600 19:45:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:06:34.600 19:45:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:34.600 19:45:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:34.600 { 00:06:34.600 "subsystems": [ 00:06:34.600 { 
00:06:34.600 "subsystem": "bdev", 00:06:34.600 "config": [ 00:06:34.600 { 00:06:34.600 "params": { 00:06:34.600 "block_size": 512, 00:06:34.600 "num_blocks": 1048576, 00:06:34.600 "name": "malloc0" 00:06:34.600 }, 00:06:34.600 "method": "bdev_malloc_create" 00:06:34.600 }, 00:06:34.600 { 00:06:34.600 "params": { 00:06:34.600 "block_size": 512, 00:06:34.600 "num_blocks": 1048576, 00:06:34.600 "name": "malloc1" 00:06:34.600 }, 00:06:34.600 "method": "bdev_malloc_create" 00:06:34.600 }, 00:06:34.600 { 00:06:34.600 "method": "bdev_wait_for_examine" 00:06:34.600 } 00:06:34.600 ] 00:06:34.600 } 00:06:34.600 ] 00:06:34.600 } 00:06:34.600 [2024-07-24 19:45:03.262240] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:06:34.600 [2024-07-24 19:45:03.262327] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62942 ] 00:06:34.858 [2024-07-24 19:45:03.404501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.858 [2024-07-24 19:45:03.507418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.115 [2024-07-24 19:45:03.550660] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:37.932  Copying: 226/512 [MB] (226 MBps) Copying: 451/512 [MB] (224 MBps) Copying: 512/512 [MB] (average 225 MBps) 00:06:37.932 00:06:37.932 00:06:37.932 real 0m6.868s 00:06:37.932 user 0m6.003s 00:06:37.932 sys 0m0.707s 00:06:37.932 19:45:06 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.932 19:45:06 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:37.932 ************************************ 00:06:37.932 END TEST dd_malloc_copy 00:06:37.932 ************************************ 00:06:38.191 
00:06:38.191 real 0m7.031s 00:06:38.191 user 0m6.066s 00:06:38.191 sys 0m0.815s 00:06:38.191 19:45:06 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.191 19:45:06 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:38.191 ************************************ 00:06:38.191 END TEST spdk_dd_malloc 00:06:38.191 ************************************ 00:06:38.191 19:45:06 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:38.191 19:45:06 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:38.191 19:45:06 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.191 19:45:06 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:38.191 ************************************ 00:06:38.191 START TEST spdk_dd_bdev_to_bdev 00:06:38.191 ************************************ 00:06:38.191 19:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:38.191 * Looking for test storage... 
00:06:38.191 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:38.191 19:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:38.191 19:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:38.191 19:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:38.191 19:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:38.191 19:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.191 19:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.191 19:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.191 19:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:06:38.191 19:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.191 19:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:06:38.191 19:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:06:38.191 19:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:06:38.191 19:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:06:38.191 19:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:06:38.191 19:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:06:38.191 19:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:06:38.191 19:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- 
# nvme1=Nvme1 00:06:38.191 19:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:06:38.191 19:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:06:38.192 19:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:38.192 19:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:38.192 19:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:06:38.192 19:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:06:38.192 19:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:38.192 19:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:38.192 19:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:06:38.192 19:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:06:38.192 19:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:38.192 19:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:38.192 19:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.192 19:45:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:38.192 ************************************ 00:06:38.192 START TEST dd_inflate_file 
00:06:38.192 ************************************ 00:06:38.192 19:45:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:38.192 [2024-07-24 19:45:06.834895] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:06:38.192 [2024-07-24 19:45:06.835026] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63041 ] 00:06:38.450 [2024-07-24 19:45:06.982810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.450 [2024-07-24 19:45:07.107621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.708 [2024-07-24 19:45:07.156226] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:38.966  Copying: 64/64 [MB] (average 1777 MBps) 00:06:38.966 00:06:38.966 00:06:38.966 real 0m0.622s 00:06:38.966 user 0m0.365s 00:06:38.966 sys 0m0.286s 00:06:38.966 19:45:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.966 19:45:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:06:38.966 ************************************ 00:06:38.966 END TEST dd_inflate_file 00:06:38.966 ************************************ 00:06:38.966 19:45:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:06:38.966 19:45:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:06:38.966 19:45:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:38.966 19:45:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:38.966 19:45:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:06:38.966 19:45:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.966 19:45:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:38.966 19:45:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:38.966 19:45:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:38.966 ************************************ 00:06:38.966 START TEST dd_copy_to_out_bdev 00:06:38.966 ************************************ 00:06:38.966 19:45:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:38.966 [2024-07-24 19:45:07.516820] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:38.966 [2024-07-24 19:45:07.516901] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63080 ] 00:06:38.966 { 00:06:38.966 "subsystems": [ 00:06:38.966 { 00:06:38.966 "subsystem": "bdev", 00:06:38.966 "config": [ 00:06:38.966 { 00:06:38.966 "params": { 00:06:38.966 "trtype": "pcie", 00:06:38.966 "traddr": "0000:00:10.0", 00:06:38.966 "name": "Nvme0" 00:06:38.966 }, 00:06:38.966 "method": "bdev_nvme_attach_controller" 00:06:38.966 }, 00:06:38.966 { 00:06:38.966 "params": { 00:06:38.966 "trtype": "pcie", 00:06:38.966 "traddr": "0000:00:11.0", 00:06:38.966 "name": "Nvme1" 00:06:38.966 }, 00:06:38.966 "method": "bdev_nvme_attach_controller" 00:06:38.966 }, 00:06:38.966 { 00:06:38.966 "method": "bdev_wait_for_examine" 00:06:38.966 } 00:06:38.966 ] 00:06:38.966 } 00:06:38.966 ] 00:06:38.966 } 00:06:39.224 [2024-07-24 19:45:07.653879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.224 [2024-07-24 19:45:07.764893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.224 [2024-07-24 19:45:07.808065] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:40.598  Copying: 64/64 [MB] (average 71 MBps) 00:06:40.598 00:06:40.598 00:06:40.598 real 0m1.639s 00:06:40.598 user 0m1.411s 00:06:40.598 sys 0m1.221s 00:06:40.598 ************************************ 00:06:40.598 END TEST dd_copy_to_out_bdev 00:06:40.598 ************************************ 00:06:40.598 19:45:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:40.598 19:45:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:40.598 19:45:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:06:40.598 
19:45:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:06:40.598 19:45:09 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:40.598 19:45:09 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.598 19:45:09 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:40.598 ************************************ 00:06:40.598 START TEST dd_offset_magic 00:06:40.598 ************************************ 00:06:40.598 19:45:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1125 -- # offset_magic 00:06:40.598 19:45:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:06:40.598 19:45:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:06:40.598 19:45:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:06:40.599 19:45:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:40.599 19:45:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:06:40.599 19:45:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:40.599 19:45:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:40.599 19:45:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:40.599 { 00:06:40.599 "subsystems": [ 00:06:40.599 { 00:06:40.599 "subsystem": "bdev", 00:06:40.599 "config": [ 00:06:40.599 { 00:06:40.599 "params": { 00:06:40.599 "trtype": "pcie", 00:06:40.599 "traddr": "0000:00:10.0", 00:06:40.599 "name": "Nvme0" 00:06:40.599 }, 00:06:40.599 "method": 
"bdev_nvme_attach_controller" 00:06:40.599 }, 00:06:40.599 { 00:06:40.599 "params": { 00:06:40.599 "trtype": "pcie", 00:06:40.599 "traddr": "0000:00:11.0", 00:06:40.599 "name": "Nvme1" 00:06:40.599 }, 00:06:40.599 "method": "bdev_nvme_attach_controller" 00:06:40.599 }, 00:06:40.599 { 00:06:40.599 "method": "bdev_wait_for_examine" 00:06:40.599 } 00:06:40.599 ] 00:06:40.599 } 00:06:40.599 ] 00:06:40.599 } 00:06:40.599 [2024-07-24 19:45:09.220686] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:06:40.599 [2024-07-24 19:45:09.220793] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63125 ] 00:06:40.856 [2024-07-24 19:45:09.364364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.856 [2024-07-24 19:45:09.481048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.114 [2024-07-24 19:45:09.529666] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:41.426  Copying: 65/65 [MB] (average 928 MBps) 00:06:41.426 00:06:41.426 19:45:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:06:41.426 19:45:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:41.426 19:45:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:41.426 19:45:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:41.426 [2024-07-24 19:45:10.045273] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:41.426 [2024-07-24 19:45:10.045359] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63140 ] 00:06:41.426 { 00:06:41.426 "subsystems": [ 00:06:41.426 { 00:06:41.426 "subsystem": "bdev", 00:06:41.426 "config": [ 00:06:41.426 { 00:06:41.426 "params": { 00:06:41.426 "trtype": "pcie", 00:06:41.426 "traddr": "0000:00:10.0", 00:06:41.426 "name": "Nvme0" 00:06:41.426 }, 00:06:41.426 "method": "bdev_nvme_attach_controller" 00:06:41.426 }, 00:06:41.426 { 00:06:41.426 "params": { 00:06:41.426 "trtype": "pcie", 00:06:41.426 "traddr": "0000:00:11.0", 00:06:41.426 "name": "Nvme1" 00:06:41.426 }, 00:06:41.426 "method": "bdev_nvme_attach_controller" 00:06:41.427 }, 00:06:41.427 { 00:06:41.427 "method": "bdev_wait_for_examine" 00:06:41.427 } 00:06:41.427 ] 00:06:41.427 } 00:06:41.427 ] 00:06:41.427 } 00:06:41.686 [2024-07-24 19:45:10.180475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.686 [2024-07-24 19:45:10.285294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.686 [2024-07-24 19:45:10.329057] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:42.202  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:42.202 00:06:42.202 19:45:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:42.202 19:45:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:42.202 19:45:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:42.202 19:45:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:06:42.202 19:45:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:42.202 19:45:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:42.202 19:45:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:42.202 [2024-07-24 19:45:10.751091] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:06:42.202 [2024-07-24 19:45:10.751481] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63156 ] 00:06:42.202 { 00:06:42.202 "subsystems": [ 00:06:42.202 { 00:06:42.202 "subsystem": "bdev", 00:06:42.202 "config": [ 00:06:42.202 { 00:06:42.202 "params": { 00:06:42.202 "trtype": "pcie", 00:06:42.202 "traddr": "0000:00:10.0", 00:06:42.202 "name": "Nvme0" 00:06:42.202 }, 00:06:42.202 "method": "bdev_nvme_attach_controller" 00:06:42.202 }, 00:06:42.202 { 00:06:42.202 "params": { 00:06:42.202 "trtype": "pcie", 00:06:42.202 "traddr": "0000:00:11.0", 00:06:42.202 "name": "Nvme1" 00:06:42.202 }, 00:06:42.202 "method": "bdev_nvme_attach_controller" 00:06:42.202 }, 00:06:42.202 { 00:06:42.202 "method": "bdev_wait_for_examine" 00:06:42.202 } 00:06:42.202 ] 00:06:42.202 } 00:06:42.202 ] 00:06:42.202 } 00:06:42.460 [2024-07-24 19:45:10.901022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.460 [2024-07-24 19:45:11.026540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.460 [2024-07-24 19:45:11.076847] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:42.976  Copying: 65/65 [MB] (average 984 MBps) 00:06:42.976 00:06:42.976 19:45:11 
spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:06:42.976 19:45:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:42.976 19:45:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:42.976 19:45:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:42.976 [2024-07-24 19:45:11.592781] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:06:42.976 [2024-07-24 19:45:11.592863] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63176 ] 00:06:42.976 { 00:06:42.976 "subsystems": [ 00:06:42.976 { 00:06:42.976 "subsystem": "bdev", 00:06:42.976 "config": [ 00:06:42.976 { 00:06:42.976 "params": { 00:06:42.976 "trtype": "pcie", 00:06:42.976 "traddr": "0000:00:10.0", 00:06:42.976 "name": "Nvme0" 00:06:42.976 }, 00:06:42.976 "method": "bdev_nvme_attach_controller" 00:06:42.976 }, 00:06:42.976 { 00:06:42.976 "params": { 00:06:42.976 "trtype": "pcie", 00:06:42.976 "traddr": "0000:00:11.0", 00:06:42.976 "name": "Nvme1" 00:06:42.976 }, 00:06:42.976 "method": "bdev_nvme_attach_controller" 00:06:42.976 }, 00:06:42.976 { 00:06:42.976 "method": "bdev_wait_for_examine" 00:06:42.976 } 00:06:42.976 ] 00:06:42.976 } 00:06:42.976 ] 00:06:42.976 } 00:06:43.234 [2024-07-24 19:45:11.726768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.234 [2024-07-24 19:45:11.834373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.234 [2024-07-24 19:45:11.878922] sock.c: 25:sock_subsystem_init: *NOTICE*: Default 
socket implementaion override: uring 00:06:43.749  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:43.749 00:06:43.749 19:45:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:43.749 19:45:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:43.749 00:06:43.749 real 0m3.086s 00:06:43.749 user 0m2.240s 00:06:43.749 sys 0m0.848s 00:06:43.749 19:45:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.749 ************************************ 00:06:43.749 END TEST dd_offset_magic 00:06:43.749 19:45:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:43.749 ************************************ 00:06:43.749 19:45:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:06:43.749 19:45:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:06:43.749 19:45:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:43.749 19:45:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:43.749 19:45:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:43.749 19:45:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:43.749 19:45:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:43.749 19:45:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:06:43.749 19:45:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:43.749 19:45:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:43.749 19:45:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 
00:06:43.749 [2024-07-24 19:45:12.359208] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:06:43.749 [2024-07-24 19:45:12.359519] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63212 ] 00:06:43.749 { 00:06:43.749 "subsystems": [ 00:06:43.749 { 00:06:43.749 "subsystem": "bdev", 00:06:43.749 "config": [ 00:06:43.749 { 00:06:43.749 "params": { 00:06:43.749 "trtype": "pcie", 00:06:43.749 "traddr": "0000:00:10.0", 00:06:43.749 "name": "Nvme0" 00:06:43.749 }, 00:06:43.749 "method": "bdev_nvme_attach_controller" 00:06:43.749 }, 00:06:43.749 { 00:06:43.749 "params": { 00:06:43.749 "trtype": "pcie", 00:06:43.749 "traddr": "0000:00:11.0", 00:06:43.749 "name": "Nvme1" 00:06:43.749 }, 00:06:43.749 "method": "bdev_nvme_attach_controller" 00:06:43.749 }, 00:06:43.749 { 00:06:43.749 "method": "bdev_wait_for_examine" 00:06:43.749 } 00:06:43.749 ] 00:06:43.749 } 00:06:43.749 ] 00:06:43.749 } 00:06:44.006 [2024-07-24 19:45:12.503929] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.006 [2024-07-24 19:45:12.625001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.265 [2024-07-24 19:45:12.673989] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:44.522  Copying: 5120/5120 [kB] (average 1000 MBps) 00:06:44.522 00:06:44.522 19:45:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:06:44.522 19:45:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:06:44.522 19:45:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:44.522 19:45:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:44.522 19:45:13 spdk_dd.spdk_dd_bdev_to_bdev -- 
dd/common.sh@14 -- # local bs=1048576 00:06:44.522 19:45:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:44.523 19:45:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:06:44.523 19:45:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:44.523 19:45:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:44.523 19:45:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:44.523 [2024-07-24 19:45:13.114346] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:06:44.523 [2024-07-24 19:45:13.114653] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63229 ] 00:06:44.523 { 00:06:44.523 "subsystems": [ 00:06:44.523 { 00:06:44.523 "subsystem": "bdev", 00:06:44.523 "config": [ 00:06:44.523 { 00:06:44.523 "params": { 00:06:44.523 "trtype": "pcie", 00:06:44.523 "traddr": "0000:00:10.0", 00:06:44.523 "name": "Nvme0" 00:06:44.523 }, 00:06:44.523 "method": "bdev_nvme_attach_controller" 00:06:44.523 }, 00:06:44.523 { 00:06:44.523 "params": { 00:06:44.523 "trtype": "pcie", 00:06:44.523 "traddr": "0000:00:11.0", 00:06:44.523 "name": "Nvme1" 00:06:44.523 }, 00:06:44.523 "method": "bdev_nvme_attach_controller" 00:06:44.523 }, 00:06:44.523 { 00:06:44.523 "method": "bdev_wait_for_examine" 00:06:44.523 } 00:06:44.523 ] 00:06:44.523 } 00:06:44.523 ] 00:06:44.523 } 00:06:44.781 [2024-07-24 19:45:13.262379] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.781 [2024-07-24 19:45:13.383104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.781 [2024-07-24 19:45:13.431765] sock.c: 
25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:45.296  Copying: 5120/5120 [kB] (average 714 MBps) 00:06:45.296 00:06:45.296 19:45:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:06:45.554 ************************************ 00:06:45.554 END TEST spdk_dd_bdev_to_bdev 00:06:45.554 ************************************ 00:06:45.554 00:06:45.554 real 0m7.302s 00:06:45.554 user 0m5.345s 00:06:45.554 sys 0m3.114s 00:06:45.554 19:45:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.554 19:45:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:45.554 19:45:14 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:06:45.554 19:45:14 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:45.554 19:45:14 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.554 19:45:14 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.554 19:45:14 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:45.554 ************************************ 00:06:45.554 START TEST spdk_dd_uring 00:06:45.554 ************************************ 00:06:45.554 19:45:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:45.554 * Looking for test storage... 
00:06:45.554 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:45.554 19:45:14 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:45.554 19:45:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:45.554 19:45:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:45.554 19:45:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:45.554 19:45:14 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.554 19:45:14 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.554 19:45:14 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.554 19:45:14 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:06:45.554 19:45:14 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.554 19:45:14 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:06:45.554 19:45:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.554 19:45:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.554 19:45:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:06:45.554 ************************************ 00:06:45.554 START TEST dd_uring_copy 00:06:45.554 ************************************ 00:06:45.554 19:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1125 -- # uring_zram_copy 00:06:45.554 19:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:06:45.554 19:45:14 
spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:06:45.554 19:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:06:45.554 19:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:45.554 19:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:06:45.554 19:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:06:45.554 19:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:06:45.554 19:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:06:45.554 19:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:06:45.554 19:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:06:45.554 19:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:06:45.554 19:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:06:45.554 19:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:06:45.554 19:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:06:45.554 19:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:06:45.554 19:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:06:45.555 19:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:06:45.555 19:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:06:45.555 19:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:06:45.555 19:45:14 
spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:06:45.555 19:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:45.555 19:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:06:45.555 19:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:06:45.555 19:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:06:45.555 19:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:45.555 19:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=mlglxr18b2k86xymoqts37kx6y1p1mpo5bbfpbjua4q0qpy0jp2qupyfu1nov4dthdu7gapzobulcu1hkkgi95flzbtkrs2h4qmvt56s4f3p2q28s4psd526lg5d03d79msec90uu933ymnh59twgcpeob2rd9ookoytxfenbqm0brdysqpm3181f1q3kj88agoa3rce18mtwr14eaal1djoq5yj3unqnaec9e3eonxs0gskvo1tsowm5lwygh7r9olkqb22eeflf3mi3asad2g1sdi0rxue60fbbdeskev3bags1ajrurmcqt2t092ob3sx9nvuphcpo5q8ukbz2h36zmekfwfz0l6igjccbyv8yf72yrpvsds0yncnhhqiceq0l12mckrer43pn4jpb6bdvz7xnlo56449jnjw30hcqdmk8rrvmdpor61ladmi04claxi3u3ph9q9w3e3tgp0bojnhez9cek81temxxgv732plg1ht8f8k73hy205vi98gfgbliy4agl8awcf675d19nu7lkjfdbh6slhcsw5mqwtrqs9ew0wlr09cd7qu741xxwqzoq2h0pufs4vvlsv0ysib1ri77ph7go110uqii0gm946gt8yskp5uwxww0n4n7mfwtqus9yg5og4vot3qj4ef88qcl2uxeyc20zvewjzc86mm5sq9htq17kzoe42nmielv8r9ve1dd71vdkr0q3xhy25iceq312r0eh6v8l4x4y40yj4ss319txfhkiatbte9gtca584hwzsy04rwf4k78ytbshhw24q5prporl5e76ynmu39x6qflzce378y2m6l8pdx15jyk3iqdakoaq29cui7qdw5nf78ejj63wdcxnmh568wep0ryznda421gy98ftmg9nt0btphelkghioejdihwykzfy0v2x2tg5cvs5gss0vuxr5svbe5dzbkbyvei1ul61x9jswxu26d1esiqagaul11fcwmcs9i9lcnat2qe9o8xu3689t0 00:06:45.555 19:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
mlglxr18b2k86xymoqts37kx6y1p1mpo5bbfpbjua4q0qpy0jp2qupyfu1nov4dthdu7gapzobulcu1hkkgi95flzbtkrs2h4qmvt56s4f3p2q28s4psd526lg5d03d79msec90uu933ymnh59twgcpeob2rd9ookoytxfenbqm0brdysqpm3181f1q3kj88agoa3rce18mtwr14eaal1djoq5yj3unqnaec9e3eonxs0gskvo1tsowm5lwygh7r9olkqb22eeflf3mi3asad2g1sdi0rxue60fbbdeskev3bags1ajrurmcqt2t092ob3sx9nvuphcpo5q8ukbz2h36zmekfwfz0l6igjccbyv8yf72yrpvsds0yncnhhqiceq0l12mckrer43pn4jpb6bdvz7xnlo56449jnjw30hcqdmk8rrvmdpor61ladmi04claxi3u3ph9q9w3e3tgp0bojnhez9cek81temxxgv732plg1ht8f8k73hy205vi98gfgbliy4agl8awcf675d19nu7lkjfdbh6slhcsw5mqwtrqs9ew0wlr09cd7qu741xxwqzoq2h0pufs4vvlsv0ysib1ri77ph7go110uqii0gm946gt8yskp5uwxww0n4n7mfwtqus9yg5og4vot3qj4ef88qcl2uxeyc20zvewjzc86mm5sq9htq17kzoe42nmielv8r9ve1dd71vdkr0q3xhy25iceq312r0eh6v8l4x4y40yj4ss319txfhkiatbte9gtca584hwzsy04rwf4k78ytbshhw24q5prporl5e76ynmu39x6qflzce378y2m6l8pdx15jyk3iqdakoaq29cui7qdw5nf78ejj63wdcxnmh568wep0ryznda421gy98ftmg9nt0btphelkghioejdihwykzfy0v2x2tg5cvs5gss0vuxr5svbe5dzbkbyvei1ul61x9jswxu26d1esiqagaul11fcwmcs9i9lcnat2qe9o8xu3689t0 00:06:45.555 19:45:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:06:45.812 [2024-07-24 19:45:14.250629] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:45.812 [2024-07-24 19:45:14.250734] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63300 ] 00:06:45.812 [2024-07-24 19:45:14.391585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.071 [2024-07-24 19:45:14.508909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.071 [2024-07-24 19:45:14.556528] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:47.615  Copying: 511/511 [MB] (average 984 MBps) 00:06:47.615 00:06:47.615 19:45:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:06:47.615 19:45:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:06:47.615 19:45:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:47.615 19:45:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:47.615 [2024-07-24 19:45:15.989455] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:47.615 [2024-07-24 19:45:15.989591] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63322 ] 00:06:47.615 { 00:06:47.615 "subsystems": [ 00:06:47.615 { 00:06:47.615 "subsystem": "bdev", 00:06:47.615 "config": [ 00:06:47.615 { 00:06:47.615 "params": { 00:06:47.615 "block_size": 512, 00:06:47.615 "num_blocks": 1048576, 00:06:47.615 "name": "malloc0" 00:06:47.615 }, 00:06:47.615 "method": "bdev_malloc_create" 00:06:47.615 }, 00:06:47.615 { 00:06:47.615 "params": { 00:06:47.615 "filename": "/dev/zram1", 00:06:47.615 "name": "uring0" 00:06:47.615 }, 00:06:47.615 "method": "bdev_uring_create" 00:06:47.615 }, 00:06:47.615 { 00:06:47.615 "method": "bdev_wait_for_examine" 00:06:47.615 } 00:06:47.615 ] 00:06:47.615 } 00:06:47.615 ] 00:06:47.615 } 00:06:47.615 [2024-07-24 19:45:16.135780] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.873 [2024-07-24 19:45:16.317665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.874 [2024-07-24 19:45:16.405614] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:51.332  Copying: 199/512 [MB] (199 MBps) Copying: 386/512 [MB] (187 MBps) Copying: 512/512 [MB] (average 199 MBps) 00:06:51.332 00:06:51.332 19:45:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:06:51.332 19:45:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:06:51.332 19:45:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:51.332 19:45:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:51.332 { 00:06:51.332 "subsystems": [ 00:06:51.332 { 
00:06:51.332 "subsystem": "bdev", 00:06:51.332 "config": [ 00:06:51.332 { 00:06:51.332 "params": { 00:06:51.332 "block_size": 512, 00:06:51.332 "num_blocks": 1048576, 00:06:51.332 "name": "malloc0" 00:06:51.332 }, 00:06:51.332 "method": "bdev_malloc_create" 00:06:51.332 }, 00:06:51.332 { 00:06:51.332 "params": { 00:06:51.332 "filename": "/dev/zram1", 00:06:51.332 "name": "uring0" 00:06:51.332 }, 00:06:51.332 "method": "bdev_uring_create" 00:06:51.332 }, 00:06:51.332 { 00:06:51.332 "method": "bdev_wait_for_examine" 00:06:51.332 } 00:06:51.332 ] 00:06:51.332 } 00:06:51.332 ] 00:06:51.332 } 00:06:51.332 [2024-07-24 19:45:19.759397] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:06:51.332 [2024-07-24 19:45:19.759530] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63373 ] 00:06:51.332 [2024-07-24 19:45:19.907251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.621 [2024-07-24 19:45:20.070931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.621 [2024-07-24 19:45:20.155405] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:55.987  Copying: 148/512 [MB] (148 MBps) Copying: 301/512 [MB] (153 MBps) Copying: 411/512 [MB] (109 MBps) Copying: 512/512 [MB] (average 141 MBps) 00:06:55.987 00:06:55.987 19:45:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:06:55.988 19:45:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 
mlglxr18b2k86xymoqts37kx6y1p1mpo5bbfpbjua4q0qpy0jp2qupyfu1nov4dthdu7gapzobulcu1hkkgi95flzbtkrs2h4qmvt56s4f3p2q28s4psd526lg5d03d79msec90uu933ymnh59twgcpeob2rd9ookoytxfenbqm0brdysqpm3181f1q3kj88agoa3rce18mtwr14eaal1djoq5yj3unqnaec9e3eonxs0gskvo1tsowm5lwygh7r9olkqb22eeflf3mi3asad2g1sdi0rxue60fbbdeskev3bags1ajrurmcqt2t092ob3sx9nvuphcpo5q8ukbz2h36zmekfwfz0l6igjccbyv8yf72yrpvsds0yncnhhqiceq0l12mckrer43pn4jpb6bdvz7xnlo56449jnjw30hcqdmk8rrvmdpor61ladmi04claxi3u3ph9q9w3e3tgp0bojnhez9cek81temxxgv732plg1ht8f8k73hy205vi98gfgbliy4agl8awcf675d19nu7lkjfdbh6slhcsw5mqwtrqs9ew0wlr09cd7qu741xxwqzoq2h0pufs4vvlsv0ysib1ri77ph7go110uqii0gm946gt8yskp5uwxww0n4n7mfwtqus9yg5og4vot3qj4ef88qcl2uxeyc20zvewjzc86mm5sq9htq17kzoe42nmielv8r9ve1dd71vdkr0q3xhy25iceq312r0eh6v8l4x4y40yj4ss319txfhkiatbte9gtca584hwzsy04rwf4k78ytbshhw24q5prporl5e76ynmu39x6qflzce378y2m6l8pdx15jyk3iqdakoaq29cui7qdw5nf78ejj63wdcxnmh568wep0ryznda421gy98ftmg9nt0btphelkghioejdihwykzfy0v2x2tg5cvs5gss0vuxr5svbe5dzbkbyvei1ul61x9jswxu26d1esiqagaul11fcwmcs9i9lcnat2qe9o8xu3689t0 == 
\m\l\g\l\x\r\1\8\b\2\k\8\6\x\y\m\o\q\t\s\3\7\k\x\6\y\1\p\1\m\p\o\5\b\b\f\p\b\j\u\a\4\q\0\q\p\y\0\j\p\2\q\u\p\y\f\u\1\n\o\v\4\d\t\h\d\u\7\g\a\p\z\o\b\u\l\c\u\1\h\k\k\g\i\9\5\f\l\z\b\t\k\r\s\2\h\4\q\m\v\t\5\6\s\4\f\3\p\2\q\2\8\s\4\p\s\d\5\2\6\l\g\5\d\0\3\d\7\9\m\s\e\c\9\0\u\u\9\3\3\y\m\n\h\5\9\t\w\g\c\p\e\o\b\2\r\d\9\o\o\k\o\y\t\x\f\e\n\b\q\m\0\b\r\d\y\s\q\p\m\3\1\8\1\f\1\q\3\k\j\8\8\a\g\o\a\3\r\c\e\1\8\m\t\w\r\1\4\e\a\a\l\1\d\j\o\q\5\y\j\3\u\n\q\n\a\e\c\9\e\3\e\o\n\x\s\0\g\s\k\v\o\1\t\s\o\w\m\5\l\w\y\g\h\7\r\9\o\l\k\q\b\2\2\e\e\f\l\f\3\m\i\3\a\s\a\d\2\g\1\s\d\i\0\r\x\u\e\6\0\f\b\b\d\e\s\k\e\v\3\b\a\g\s\1\a\j\r\u\r\m\c\q\t\2\t\0\9\2\o\b\3\s\x\9\n\v\u\p\h\c\p\o\5\q\8\u\k\b\z\2\h\3\6\z\m\e\k\f\w\f\z\0\l\6\i\g\j\c\c\b\y\v\8\y\f\7\2\y\r\p\v\s\d\s\0\y\n\c\n\h\h\q\i\c\e\q\0\l\1\2\m\c\k\r\e\r\4\3\p\n\4\j\p\b\6\b\d\v\z\7\x\n\l\o\5\6\4\4\9\j\n\j\w\3\0\h\c\q\d\m\k\8\r\r\v\m\d\p\o\r\6\1\l\a\d\m\i\0\4\c\l\a\x\i\3\u\3\p\h\9\q\9\w\3\e\3\t\g\p\0\b\o\j\n\h\e\z\9\c\e\k\8\1\t\e\m\x\x\g\v\7\3\2\p\l\g\1\h\t\8\f\8\k\7\3\h\y\2\0\5\v\i\9\8\g\f\g\b\l\i\y\4\a\g\l\8\a\w\c\f\6\7\5\d\1\9\n\u\7\l\k\j\f\d\b\h\6\s\l\h\c\s\w\5\m\q\w\t\r\q\s\9\e\w\0\w\l\r\0\9\c\d\7\q\u\7\4\1\x\x\w\q\z\o\q\2\h\0\p\u\f\s\4\v\v\l\s\v\0\y\s\i\b\1\r\i\7\7\p\h\7\g\o\1\1\0\u\q\i\i\0\g\m\9\4\6\g\t\8\y\s\k\p\5\u\w\x\w\w\0\n\4\n\7\m\f\w\t\q\u\s\9\y\g\5\o\g\4\v\o\t\3\q\j\4\e\f\8\8\q\c\l\2\u\x\e\y\c\2\0\z\v\e\w\j\z\c\8\6\m\m\5\s\q\9\h\t\q\1\7\k\z\o\e\4\2\n\m\i\e\l\v\8\r\9\v\e\1\d\d\7\1\v\d\k\r\0\q\3\x\h\y\2\5\i\c\e\q\3\1\2\r\0\e\h\6\v\8\l\4\x\4\y\4\0\y\j\4\s\s\3\1\9\t\x\f\h\k\i\a\t\b\t\e\9\g\t\c\a\5\8\4\h\w\z\s\y\0\4\r\w\f\4\k\7\8\y\t\b\s\h\h\w\2\4\q\5\p\r\p\o\r\l\5\e\7\6\y\n\m\u\3\9\x\6\q\f\l\z\c\e\3\7\8\y\2\m\6\l\8\p\d\x\1\5\j\y\k\3\i\q\d\a\k\o\a\q\2\9\c\u\i\7\q\d\w\5\n\f\7\8\e\j\j\6\3\w\d\c\x\n\m\h\5\6\8\w\e\p\0\r\y\z\n\d\a\4\2\1\g\y\9\8\f\t\m\g\9\n\t\0\b\t\p\h\e\l\k\g\h\i\o\e\j\d\i\h\w\y\k\z\f\y\0\v\2\x\2\t\g\5\c\v\s\5\g\s\s\0\v\u\x\r\5\s\v\b\e\5\d\z\b\k\b\y\v\e\i\1\u\l\6\1\x\9\j\s\w\x\u\2\6\d\1\e\s\i\q\a\g\a\u\l\1\1\f\c\w\m
\c\s\9\i\9\l\c\n\a\t\2\q\e\9\o\8\x\u\3\6\8\9\t\0 ]] 00:06:55.988 19:45:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:06:55.988 19:45:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ mlglxr18b2k86xymoqts37kx6y1p1mpo5bbfpbjua4q0qpy0jp2qupyfu1nov4dthdu7gapzobulcu1hkkgi95flzbtkrs2h4qmvt56s4f3p2q28s4psd526lg5d03d79msec90uu933ymnh59twgcpeob2rd9ookoytxfenbqm0brdysqpm3181f1q3kj88agoa3rce18mtwr14eaal1djoq5yj3unqnaec9e3eonxs0gskvo1tsowm5lwygh7r9olkqb22eeflf3mi3asad2g1sdi0rxue60fbbdeskev3bags1ajrurmcqt2t092ob3sx9nvuphcpo5q8ukbz2h36zmekfwfz0l6igjccbyv8yf72yrpvsds0yncnhhqiceq0l12mckrer43pn4jpb6bdvz7xnlo56449jnjw30hcqdmk8rrvmdpor61ladmi04claxi3u3ph9q9w3e3tgp0bojnhez9cek81temxxgv732plg1ht8f8k73hy205vi98gfgbliy4agl8awcf675d19nu7lkjfdbh6slhcsw5mqwtrqs9ew0wlr09cd7qu741xxwqzoq2h0pufs4vvlsv0ysib1ri77ph7go110uqii0gm946gt8yskp5uwxww0n4n7mfwtqus9yg5og4vot3qj4ef88qcl2uxeyc20zvewjzc86mm5sq9htq17kzoe42nmielv8r9ve1dd71vdkr0q3xhy25iceq312r0eh6v8l4x4y40yj4ss319txfhkiatbte9gtca584hwzsy04rwf4k78ytbshhw24q5prporl5e76ynmu39x6qflzce378y2m6l8pdx15jyk3iqdakoaq29cui7qdw5nf78ejj63wdcxnmh568wep0ryznda421gy98ftmg9nt0btphelkghioejdihwykzfy0v2x2tg5cvs5gss0vuxr5svbe5dzbkbyvei1ul61x9jswxu26d1esiqagaul11fcwmcs9i9lcnat2qe9o8xu3689t0 == 
\m\l\g\l\x\r\1\8\b\2\k\8\6\x\y\m\o\q\t\s\3\7\k\x\6\y\1\p\1\m\p\o\5\b\b\f\p\b\j\u\a\4\q\0\q\p\y\0\j\p\2\q\u\p\y\f\u\1\n\o\v\4\d\t\h\d\u\7\g\a\p\z\o\b\u\l\c\u\1\h\k\k\g\i\9\5\f\l\z\b\t\k\r\s\2\h\4\q\m\v\t\5\6\s\4\f\3\p\2\q\2\8\s\4\p\s\d\5\2\6\l\g\5\d\0\3\d\7\9\m\s\e\c\9\0\u\u\9\3\3\y\m\n\h\5\9\t\w\g\c\p\e\o\b\2\r\d\9\o\o\k\o\y\t\x\f\e\n\b\q\m\0\b\r\d\y\s\q\p\m\3\1\8\1\f\1\q\3\k\j\8\8\a\g\o\a\3\r\c\e\1\8\m\t\w\r\1\4\e\a\a\l\1\d\j\o\q\5\y\j\3\u\n\q\n\a\e\c\9\e\3\e\o\n\x\s\0\g\s\k\v\o\1\t\s\o\w\m\5\l\w\y\g\h\7\r\9\o\l\k\q\b\2\2\e\e\f\l\f\3\m\i\3\a\s\a\d\2\g\1\s\d\i\0\r\x\u\e\6\0\f\b\b\d\e\s\k\e\v\3\b\a\g\s\1\a\j\r\u\r\m\c\q\t\2\t\0\9\2\o\b\3\s\x\9\n\v\u\p\h\c\p\o\5\q\8\u\k\b\z\2\h\3\6\z\m\e\k\f\w\f\z\0\l\6\i\g\j\c\c\b\y\v\8\y\f\7\2\y\r\p\v\s\d\s\0\y\n\c\n\h\h\q\i\c\e\q\0\l\1\2\m\c\k\r\e\r\4\3\p\n\4\j\p\b\6\b\d\v\z\7\x\n\l\o\5\6\4\4\9\j\n\j\w\3\0\h\c\q\d\m\k\8\r\r\v\m\d\p\o\r\6\1\l\a\d\m\i\0\4\c\l\a\x\i\3\u\3\p\h\9\q\9\w\3\e\3\t\g\p\0\b\o\j\n\h\e\z\9\c\e\k\8\1\t\e\m\x\x\g\v\7\3\2\p\l\g\1\h\t\8\f\8\k\7\3\h\y\2\0\5\v\i\9\8\g\f\g\b\l\i\y\4\a\g\l\8\a\w\c\f\6\7\5\d\1\9\n\u\7\l\k\j\f\d\b\h\6\s\l\h\c\s\w\5\m\q\w\t\r\q\s\9\e\w\0\w\l\r\0\9\c\d\7\q\u\7\4\1\x\x\w\q\z\o\q\2\h\0\p\u\f\s\4\v\v\l\s\v\0\y\s\i\b\1\r\i\7\7\p\h\7\g\o\1\1\0\u\q\i\i\0\g\m\9\4\6\g\t\8\y\s\k\p\5\u\w\x\w\w\0\n\4\n\7\m\f\w\t\q\u\s\9\y\g\5\o\g\4\v\o\t\3\q\j\4\e\f\8\8\q\c\l\2\u\x\e\y\c\2\0\z\v\e\w\j\z\c\8\6\m\m\5\s\q\9\h\t\q\1\7\k\z\o\e\4\2\n\m\i\e\l\v\8\r\9\v\e\1\d\d\7\1\v\d\k\r\0\q\3\x\h\y\2\5\i\c\e\q\3\1\2\r\0\e\h\6\v\8\l\4\x\4\y\4\0\y\j\4\s\s\3\1\9\t\x\f\h\k\i\a\t\b\t\e\9\g\t\c\a\5\8\4\h\w\z\s\y\0\4\r\w\f\4\k\7\8\y\t\b\s\h\h\w\2\4\q\5\p\r\p\o\r\l\5\e\7\6\y\n\m\u\3\9\x\6\q\f\l\z\c\e\3\7\8\y\2\m\6\l\8\p\d\x\1\5\j\y\k\3\i\q\d\a\k\o\a\q\2\9\c\u\i\7\q\d\w\5\n\f\7\8\e\j\j\6\3\w\d\c\x\n\m\h\5\6\8\w\e\p\0\r\y\z\n\d\a\4\2\1\g\y\9\8\f\t\m\g\9\n\t\0\b\t\p\h\e\l\k\g\h\i\o\e\j\d\i\h\w\y\k\z\f\y\0\v\2\x\2\t\g\5\c\v\s\5\g\s\s\0\v\u\x\r\5\s\v\b\e\5\d\z\b\k\b\y\v\e\i\1\u\l\6\1\x\9\j\s\w\x\u\2\6\d\1\e\s\i\q\a\g\a\u\l\1\1\f\c\w\m
\c\s\9\i\9\l\c\n\a\t\2\q\e\9\o\8\x\u\3\6\8\9\t\0 ]] 00:06:55.988 19:45:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:56.555 19:45:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:06:56.555 19:45:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:06:56.555 19:45:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:56.555 19:45:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:56.555 { 00:06:56.555 "subsystems": [ 00:06:56.555 { 00:06:56.555 "subsystem": "bdev", 00:06:56.555 "config": [ 00:06:56.555 { 00:06:56.555 "params": { 00:06:56.555 "block_size": 512, 00:06:56.555 "num_blocks": 1048576, 00:06:56.555 "name": "malloc0" 00:06:56.555 }, 00:06:56.555 "method": "bdev_malloc_create" 00:06:56.555 }, 00:06:56.555 { 00:06:56.555 "params": { 00:06:56.555 "filename": "/dev/zram1", 00:06:56.555 "name": "uring0" 00:06:56.555 }, 00:06:56.555 "method": "bdev_uring_create" 00:06:56.555 }, 00:06:56.555 { 00:06:56.555 "method": "bdev_wait_for_examine" 00:06:56.555 } 00:06:56.555 ] 00:06:56.555 } 00:06:56.555 ] 00:06:56.555 } 00:06:56.555 [2024-07-24 19:45:25.097215] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:56.555 [2024-07-24 19:45:25.097360] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63459 ] 00:06:56.813 [2024-07-24 19:45:25.246418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.813 [2024-07-24 19:45:25.474076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.071 [2024-07-24 19:45:25.572819] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:00.907  Copying: 180/512 [MB] (180 MBps) Copying: 342/512 [MB] (161 MBps) Copying: 505/512 [MB] (163 MBps) Copying: 512/512 [MB] (average 168 MBps) 00:07:00.907 00:07:00.907 19:45:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:07:00.907 19:45:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:07:00.907 19:45:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:00.907 19:45:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:00.907 19:45:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:07:00.907 19:45:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:07:00.907 19:45:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:00.907 19:45:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:01.165 [2024-07-24 19:45:29.592689] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:07:01.165 [2024-07-24 19:45:29.592786] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63526 ] 00:07:01.165 { 00:07:01.165 "subsystems": [ 00:07:01.165 { 00:07:01.165 "subsystem": "bdev", 00:07:01.165 "config": [ 00:07:01.165 { 00:07:01.165 "params": { 00:07:01.165 "block_size": 512, 00:07:01.165 "num_blocks": 1048576, 00:07:01.165 "name": "malloc0" 00:07:01.165 }, 00:07:01.165 "method": "bdev_malloc_create" 00:07:01.165 }, 00:07:01.165 { 00:07:01.165 "params": { 00:07:01.165 "filename": "/dev/zram1", 00:07:01.165 "name": "uring0" 00:07:01.165 }, 00:07:01.165 "method": "bdev_uring_create" 00:07:01.165 }, 00:07:01.165 { 00:07:01.165 "params": { 00:07:01.165 "name": "uring0" 00:07:01.165 }, 00:07:01.165 "method": "bdev_uring_delete" 00:07:01.165 }, 00:07:01.165 { 00:07:01.165 "method": "bdev_wait_for_examine" 00:07:01.165 } 00:07:01.165 ] 00:07:01.165 } 00:07:01.165 ] 00:07:01.165 } 00:07:01.165 [2024-07-24 19:45:29.735262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.422 [2024-07-24 19:45:29.908802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.422 [2024-07-24 19:45:29.994502] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:02.247  Copying: 0/0 [B] (average 0 Bps) 00:07:02.247 00:07:02.247 19:45:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:02.247 19:45:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:07:02.247 19:45:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:07:02.247 19:45:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:02.247 19:45:30 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # local es=0 00:07:02.247 19:45:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:02.247 19:45:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:02.247 19:45:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.247 19:45:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.247 19:45:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.247 19:45:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.247 19:45:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.549 19:45:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.550 19:45:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.550 19:45:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:02.550 19:45:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:02.550 { 00:07:02.550 "subsystems": [ 00:07:02.550 { 00:07:02.550 "subsystem": "bdev", 00:07:02.550 "config": [ 00:07:02.550 { 00:07:02.550 "params": { 00:07:02.550 "block_size": 512, 00:07:02.550 "num_blocks": 1048576, 00:07:02.550 "name": "malloc0" 00:07:02.550 }, 00:07:02.550 
"method": "bdev_malloc_create" 00:07:02.550 }, 00:07:02.550 { 00:07:02.550 "params": { 00:07:02.550 "filename": "/dev/zram1", 00:07:02.550 "name": "uring0" 00:07:02.550 }, 00:07:02.550 "method": "bdev_uring_create" 00:07:02.550 }, 00:07:02.550 { 00:07:02.550 "params": { 00:07:02.550 "name": "uring0" 00:07:02.550 }, 00:07:02.550 "method": "bdev_uring_delete" 00:07:02.550 }, 00:07:02.550 { 00:07:02.550 "method": "bdev_wait_for_examine" 00:07:02.550 } 00:07:02.550 ] 00:07:02.550 } 00:07:02.550 ] 00:07:02.550 } 00:07:02.550 [2024-07-24 19:45:30.962718] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:07:02.550 [2024-07-24 19:45:30.962815] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63557 ] 00:07:02.550 [2024-07-24 19:45:31.106887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.810 [2024-07-24 19:45:31.277014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.810 [2024-07-24 19:45:31.363444] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:03.069 [2024-07-24 19:45:31.655435] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:07:03.069 [2024-07-24 19:45:31.655504] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:07:03.069 [2024-07-24 19:45:31.655514] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:07:03.069 [2024-07-24 19:45:31.655526] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:03.636 [2024-07-24 19:45:32.142749] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:03.636 19:45:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # es=237 00:07:03.636 19:45:32 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:03.636 19:45:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@662 -- # es=109 00:07:03.636 19:45:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # case "$es" in 00:07:03.636 19:45:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@670 -- # es=1 00:07:03.636 19:45:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:03.636 19:45:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:03.636 19:45:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:07:03.636 19:45:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:07:03.636 19:45:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:07:03.636 19:45:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:07:03.895 19:45:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:03.895 00:07:03.895 real 0m18.386s 00:07:03.895 user 0m12.167s 00:07:03.895 sys 0m15.444s 00:07:03.895 19:45:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.895 19:45:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:03.895 ************************************ 00:07:03.895 END TEST dd_uring_copy 00:07:03.895 ************************************ 00:07:04.153 00:07:04.153 real 0m18.549s 00:07:04.153 user 0m12.219s 00:07:04.153 sys 0m15.557s 00:07:04.153 19:45:32 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.153 ************************************ 00:07:04.153 END TEST spdk_dd_uring 00:07:04.153 ************************************ 00:07:04.153 19:45:32 
spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:04.153 19:45:32 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:04.153 19:45:32 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:04.153 19:45:32 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.153 19:45:32 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:04.153 ************************************ 00:07:04.153 START TEST spdk_dd_sparse 00:07:04.153 ************************************ 00:07:04.153 19:45:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:04.153 * Looking for test storage... 00:07:04.153 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:04.153 19:45:32 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:04.153 19:45:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:04.153 19:45:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:04.153 19:45:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:04.153 19:45:32 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.153 19:45:32 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.153 19:45:32 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.153 19:45:32 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:07:04.153 19:45:32 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.153 19:45:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:04.153 19:45:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # 
aio_bdev=dd_aio 00:07:04.153 19:45:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:07:04.153 19:45:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:04.153 19:45:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:07:04.153 19:45:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:07:04.153 19:45:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:04.153 19:45:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:04.153 19:45:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:07:04.153 19:45:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:04.153 19:45:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:04.153 1+0 records in 00:07:04.153 1+0 records out 00:07:04.153 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00950037 s, 441 MB/s 00:07:04.153 19:45:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:04.153 1+0 records in 00:07:04.153 1+0 records out 00:07:04.153 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00868197 s, 483 MB/s 00:07:04.153 19:45:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:04.153 1+0 records in 00:07:04.153 1+0 records out 00:07:04.153 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00788014 s, 532 MB/s 00:07:04.153 19:45:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:04.153 19:45:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:04.153 19:45:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.153 19:45:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:04.153 ************************************ 00:07:04.153 START TEST dd_sparse_file_to_file 00:07:04.153 
************************************ 00:07:04.153 19:45:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1125 -- # file_to_file 00:07:04.153 19:45:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:04.153 19:45:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:04.153 19:45:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:04.153 19:45:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:04.153 19:45:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:04.153 19:45:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:04.153 19:45:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:04.153 19:45:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:07:04.153 19:45:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:04.153 19:45:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:04.412 [2024-07-24 19:45:32.869260] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:07:04.412 [2024-07-24 19:45:32.869380] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63652 ] 00:07:04.412 { 00:07:04.412 "subsystems": [ 00:07:04.412 { 00:07:04.412 "subsystem": "bdev", 00:07:04.412 "config": [ 00:07:04.412 { 00:07:04.412 "params": { 00:07:04.412 "block_size": 4096, 00:07:04.412 "filename": "dd_sparse_aio_disk", 00:07:04.412 "name": "dd_aio" 00:07:04.412 }, 00:07:04.412 "method": "bdev_aio_create" 00:07:04.412 }, 00:07:04.412 { 00:07:04.412 "params": { 00:07:04.412 "lvs_name": "dd_lvstore", 00:07:04.412 "bdev_name": "dd_aio" 00:07:04.412 }, 00:07:04.412 "method": "bdev_lvol_create_lvstore" 00:07:04.412 }, 00:07:04.412 { 00:07:04.412 "method": "bdev_wait_for_examine" 00:07:04.412 } 00:07:04.412 ] 00:07:04.412 } 00:07:04.412 ] 00:07:04.412 } 00:07:04.412 [2024-07-24 19:45:33.017637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.670 [2024-07-24 19:45:33.192568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.670 [2024-07-24 19:45:33.282330] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:05.186  Copying: 12/36 [MB] (average 857 MBps) 00:07:05.186 00:07:05.186 19:45:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:07:05.186 19:45:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:07:05.186 19:45:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:07:05.186 19:45:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:07:05.186 19:45:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:05.186 19:45:33 
spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:07:05.186 19:45:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:07:05.186 19:45:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:07:05.186 19:45:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:07:05.186 19:45:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:05.186 00:07:05.186 real 0m0.995s 00:07:05.186 user 0m0.621s 00:07:05.186 sys 0m0.533s 00:07:05.186 19:45:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.186 19:45:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:05.186 ************************************ 00:07:05.186 END TEST dd_sparse_file_to_file 00:07:05.186 ************************************ 00:07:05.186 19:45:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:07:05.186 19:45:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:05.186 19:45:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.186 19:45:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:05.444 ************************************ 00:07:05.444 START TEST dd_sparse_file_to_bdev 00:07:05.444 ************************************ 00:07:05.444 19:45:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1125 -- # file_to_bdev 00:07:05.444 19:45:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:05.444 19:45:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A 
method_bdev_aio_create_0 00:07:05.444 19:45:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:07:05.444 19:45:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:07:05.444 19:45:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:07:05.444 19:45:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:07:05.444 19:45:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:05.444 19:45:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:05.444 { 00:07:05.444 "subsystems": [ 00:07:05.444 { 00:07:05.444 "subsystem": "bdev", 00:07:05.444 "config": [ 00:07:05.444 { 00:07:05.444 "params": { 00:07:05.444 "block_size": 4096, 00:07:05.444 "filename": "dd_sparse_aio_disk", 00:07:05.444 "name": "dd_aio" 00:07:05.444 }, 00:07:05.444 "method": "bdev_aio_create" 00:07:05.444 }, 00:07:05.444 { 00:07:05.444 "params": { 00:07:05.444 "lvs_name": "dd_lvstore", 00:07:05.444 "lvol_name": "dd_lvol", 00:07:05.444 "size_in_mib": 36, 00:07:05.444 "thin_provision": true 00:07:05.444 }, 00:07:05.444 "method": "bdev_lvol_create" 00:07:05.444 }, 00:07:05.444 { 00:07:05.444 "method": "bdev_wait_for_examine" 00:07:05.444 } 00:07:05.444 ] 00:07:05.444 } 00:07:05.444 ] 00:07:05.444 } 00:07:05.444 [2024-07-24 19:45:33.921601] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:07:05.444 [2024-07-24 19:45:33.921709] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63700 ] 00:07:05.444 [2024-07-24 19:45:34.067887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.702 [2024-07-24 19:45:34.228317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.702 [2024-07-24 19:45:34.311484] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:06.218  Copying: 12/36 [MB] (average 461 MBps) 00:07:06.218 00:07:06.218 ************************************ 00:07:06.218 END TEST dd_sparse_file_to_bdev 00:07:06.218 ************************************ 00:07:06.218 00:07:06.218 real 0m0.952s 00:07:06.218 user 0m0.602s 00:07:06.218 sys 0m0.513s 00:07:06.218 19:45:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.218 19:45:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:06.218 19:45:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:07:06.218 19:45:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.218 19:45:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.218 19:45:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:06.218 ************************************ 00:07:06.218 START TEST dd_sparse_bdev_to_file 00:07:06.218 ************************************ 00:07:06.218 19:45:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1125 -- # bdev_to_file 00:07:06.218 19:45:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:07:06.218 19:45:34 
spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:06.218 19:45:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:06.218 19:45:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:06.218 19:45:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:06.218 19:45:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:07:06.218 19:45:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:06.218 19:45:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:06.476 [2024-07-24 19:45:34.912387] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:07:06.476 [2024-07-24 19:45:34.912476] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63738 ] 00:07:06.476 { 00:07:06.476 "subsystems": [ 00:07:06.476 { 00:07:06.476 "subsystem": "bdev", 00:07:06.476 "config": [ 00:07:06.476 { 00:07:06.476 "params": { 00:07:06.476 "block_size": 4096, 00:07:06.476 "filename": "dd_sparse_aio_disk", 00:07:06.476 "name": "dd_aio" 00:07:06.476 }, 00:07:06.476 "method": "bdev_aio_create" 00:07:06.476 }, 00:07:06.476 { 00:07:06.476 "method": "bdev_wait_for_examine" 00:07:06.476 } 00:07:06.476 ] 00:07:06.476 } 00:07:06.476 ] 00:07:06.476 } 00:07:06.476 [2024-07-24 19:45:35.050143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.735 [2024-07-24 19:45:35.171699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.735 [2024-07-24 19:45:35.221398] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:06.993  Copying: 12/36 [MB] (average 923 MBps) 00:07:06.993 00:07:06.993 19:45:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:06.993 19:45:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:06.993 19:45:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:06.993 19:45:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:06.993 19:45:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:06.993 19:45:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:06.993 19:45:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # 
stat2_b=24576 00:07:06.993 19:45:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:06.993 19:45:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:06.993 19:45:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:06.993 00:07:06.993 real 0m0.667s 00:07:06.993 user 0m0.408s 00:07:06.993 sys 0m0.313s 00:07:06.993 19:45:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.993 19:45:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:06.993 ************************************ 00:07:06.993 END TEST dd_sparse_bdev_to_file 00:07:06.993 ************************************ 00:07:06.993 19:45:35 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:07:06.993 19:45:35 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:06.993 19:45:35 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:07:06.993 19:45:35 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:07:06.993 19:45:35 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:07:06.993 ************************************ 00:07:06.993 END TEST spdk_dd_sparse 00:07:06.993 ************************************ 00:07:06.993 00:07:06.993 real 0m2.968s 00:07:06.993 user 0m1.753s 00:07:06.993 sys 0m1.597s 00:07:06.993 19:45:35 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.993 19:45:35 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:07.253 19:45:35 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:07.253 19:45:35 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.253 19:45:35 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.253 19:45:35 spdk_dd -- 
common/autotest_common.sh@10 -- # set +x 00:07:07.253 ************************************ 00:07:07.253 START TEST spdk_dd_negative 00:07:07.253 ************************************ 00:07:07.253 19:45:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:07.253 * Looking for test storage... 00:07:07.253 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:07.253 19:45:35 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:07.253 19:45:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:07.253 19:45:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:07.253 19:45:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:07.254 19:45:35 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.254 19:45:35 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.254 19:45:35 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.254 19:45:35 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:07:07.254 19:45:35 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.254 19:45:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:07.254 19:45:35 
spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:07.254 19:45:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:07.254 19:45:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:07.254 19:45:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:07:07.254 19:45:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.254 19:45:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.254 19:45:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:07.254 ************************************ 00:07:07.254 START TEST dd_invalid_arguments 00:07:07.254 ************************************ 00:07:07.254 19:45:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1125 -- # invalid_arguments 00:07:07.254 19:45:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:07.254 19:45:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # local es=0 00:07:07.254 19:45:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:07.254 19:45:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.254 19:45:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.254 19:45:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.254 19:45:35 
spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.254 19:45:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.254 19:45:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.254 19:45:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.254 19:45:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:07.254 19:45:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:07.254 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:07.254 00:07:07.254 CPU options: 00:07:07.254 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:07.254 (like [0,1,10]) 00:07:07.254 --lcores lcore to CPU mapping list. The list is in the format: 00:07:07.254 [<,lcores[@CPUs]>...] 00:07:07.254 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:07.254 Within the group, '-' is used for range separator, 00:07:07.254 ',' is used for single number separator. 00:07:07.254 '( )' can be omitted for single element group, 00:07:07.254 '@' can be omitted if cpus and lcores have the same value 00:07:07.254 --disable-cpumask-locks Disable CPU core lock files. 
00:07:07.254 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:07.254 pollers in the app support interrupt mode) 00:07:07.254 -p, --main-core main (primary) core for DPDK 00:07:07.254 00:07:07.254 Configuration options: 00:07:07.254 -c, --config, --json JSON config file 00:07:07.254 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:07.254 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:07:07.254 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:07.254 --rpcs-allowed comma-separated list of permitted RPCS 00:07:07.254 --json-ignore-init-errors don't exit on invalid config entry 00:07:07.254 00:07:07.254 Memory options: 00:07:07.254 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:07.254 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:07.254 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:07.254 -R, --huge-unlink unlink huge files after initialization 00:07:07.254 -n, --mem-channels number of memory channels used for DPDK 00:07:07.254 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:07.254 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:07.254 --no-huge run without using hugepages 00:07:07.254 -i, --shm-id shared memory ID (optional) 00:07:07.254 -g, --single-file-segments force creating just one hugetlbfs file 00:07:07.254 00:07:07.254 PCI options: 00:07:07.254 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:07.254 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:07.254 -u, --no-pci disable PCI access 00:07:07.254 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:07.254 00:07:07.254 Log options: 00:07:07.254 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:07:07.254 app_config, 
app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:07:07.254 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:07:07.254 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:07:07.254 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:07:07.254 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:07:07.254 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:07:07.254 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:07:07.254 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:07:07.254 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:07:07.254 virtio_vfio_user, vmd) 00:07:07.254 --silence-noticelog disable notice level logging to stderr 00:07:07.254 00:07:07.254 Trace options: 00:07:07.254 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:07.254 setting 0 to disable trace (default 32768) 00:07:07.254 Tracepoints vary in size and can use more than one trace entry. 00:07:07.254 -e, --tpoint-group [:] 00:07:07.254 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:07:07.254 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:07.254 [2024-07-24 19:45:35.855984] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:07:07.254 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:07:07.254 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:07.254 a tracepoint group. First tpoint inside a group can be enabled by 00:07:07.254 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:07.254 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:07:07.254 in /include/spdk_internal/trace_defs.h 00:07:07.254 00:07:07.254 Other options: 00:07:07.254 -h, --help show this usage 00:07:07.254 -v, --version print SPDK version 00:07:07.254 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:07.254 --env-context Opaque context for use of the env implementation 00:07:07.254 00:07:07.254 Application specific: 00:07:07.254 [--------- DD Options ---------] 00:07:07.254 --if Input file. Must specify either --if or --ib. 00:07:07.254 --ib Input bdev. Must specifier either --if or --ib 00:07:07.254 --of Output file. Must specify either --of or --ob. 00:07:07.254 --ob Output bdev. Must specify either --of or --ob. 00:07:07.254 --iflag Input file flags. 00:07:07.254 --oflag Output file flags. 00:07:07.254 --bs I/O unit size (default: 4096) 00:07:07.254 --qd Queue depth (default: 2) 00:07:07.254 --count I/O unit count. The number of I/O units to copy. (default: all) 00:07:07.254 --skip Skip this many I/O units at start of input. (default: 0) 00:07:07.254 --seek Skip this many I/O units at start of output. (default: 0) 00:07:07.255 --aio Force usage of AIO. 
(by default io_uring is used if available) 00:07:07.255 --sparse Enable hole skipping in input target 00:07:07.255 Available iflag and oflag values: 00:07:07.255 append - append mode 00:07:07.255 direct - use direct I/O for data 00:07:07.255 directory - fail unless a directory 00:07:07.255 dsync - use synchronized I/O for data 00:07:07.255 noatime - do not update access time 00:07:07.255 noctty - do not assign controlling terminal from file 00:07:07.255 nofollow - do not follow symlinks 00:07:07.255 nonblock - use non-blocking I/O 00:07:07.255 sync - use synchronized I/O for data and metadata 00:07:07.255 19:45:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # es=2 00:07:07.255 19:45:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:07.255 19:45:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:07.255 19:45:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:07.255 00:07:07.255 real 0m0.077s 00:07:07.255 user 0m0.047s 00:07:07.255 sys 0m0.027s 00:07:07.255 19:45:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.255 19:45:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:07:07.255 ************************************ 00:07:07.255 END TEST dd_invalid_arguments 00:07:07.255 ************************************ 00:07:07.514 19:45:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:07:07.514 19:45:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.514 19:45:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.514 19:45:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:07.514 ************************************ 
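Editor's note: the --bs/--count/--skip/--seek options in the usage text above count in I/O units rather than bytes, in the manner of coreutils dd (the analogy to coreutils dd is an assumption; spdk_dd itself is not run in this sketch):

```shell
# Illustrate unit-based skip/count semantics with coreutils dd:
# the input holds three 4-byte units; skip one input unit, copy one unit.
tmp=$(mktemp -d)
printf 'AAAABBBBCCCC' > "$tmp/in"
dd if="$tmp/in" of="$tmp/out" bs=4 skip=1 count=1 2>/dev/null
cat "$tmp/out"    # BBBB: one unit skipped, one unit copied
rm -r "$tmp"
```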
00:07:07.514 START TEST dd_double_input 00:07:07.514 ************************************ 00:07:07.514 19:45:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1125 -- # double_input 00:07:07.514 19:45:35 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:07.514 19:45:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # local es=0 00:07:07.514 19:45:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:07.514 19:45:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.514 19:45:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.514 19:45:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.514 19:45:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.514 19:45:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.514 19:45:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.514 19:45:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.514 19:45:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:07.514 19:45:35 spdk_dd.spdk_dd_negative.dd_double_input -- 
common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:07.514 [2024-07-24 19:45:35.990023] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 00:07:07.514 19:45:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # es=22 00:07:07.514 19:45:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:07.514 19:45:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:07.514 19:45:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:07.514 ************************************ 00:07:07.514 END TEST dd_double_input 00:07:07.514 ************************************ 00:07:07.514 00:07:07.514 real 0m0.078s 00:07:07.514 user 0m0.047s 00:07:07.514 sys 0m0.028s 00:07:07.514 19:45:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.514 19:45:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:07:07.514 19:45:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:07:07.514 19:45:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.514 19:45:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.514 19:45:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:07.514 ************************************ 00:07:07.514 START TEST dd_double_output 00:07:07.514 ************************************ 00:07:07.514 19:45:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1125 -- # double_output 00:07:07.514 19:45:36 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:07.514 19:45:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # local es=0 00:07:07.514 19:45:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:07.514 19:45:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.514 19:45:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.514 19:45:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.514 19:45:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.514 19:45:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.514 19:45:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.515 19:45:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.515 19:45:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:07.515 19:45:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:07.515 [2024-07-24 19:45:36.124263] spdk_dd.c:1493:main: *ERROR*: You 
may specify either --of or --ob, but not both. 00:07:07.515 19:45:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # es=22 00:07:07.515 19:45:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:07.515 19:45:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:07.515 19:45:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:07.515 00:07:07.515 real 0m0.074s 00:07:07.515 user 0m0.049s 00:07:07.515 sys 0m0.023s 00:07:07.515 19:45:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.515 19:45:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:07:07.515 ************************************ 00:07:07.515 END TEST dd_double_output 00:07:07.515 ************************************ 00:07:07.773 19:45:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:07:07.773 19:45:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:07.774 ************************************ 00:07:07.774 START TEST dd_no_input 00:07:07.774 ************************************ 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1125 -- # no_input 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # local es=0 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:07.774 [2024-07-24 19:45:36.260296] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # es=22 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:07.774 00:07:07.774 real 0m0.060s 00:07:07.774 user 0m0.033s 00:07:07.774 sys 0m0.026s 00:07:07.774 19:45:36 
spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:07:07.774 ************************************ 00:07:07.774 END TEST dd_no_input 00:07:07.774 ************************************ 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:07.774 ************************************ 00:07:07.774 START TEST dd_no_output 00:07:07.774 ************************************ 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1125 -- # no_output 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # local es=0 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative.dd_no_output -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:07.774 [2024-07-24 19:45:36.395964] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # es=22 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:07.774 00:07:07.774 real 0m0.083s 00:07:07.774 user 0m0.045s 00:07:07.774 sys 0m0.037s 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.774 ************************************ 00:07:07.774 END TEST dd_no_output 00:07:07.774 ************************************ 00:07:07.774 19:45:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:07:08.033 19:45:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:08.033 19:45:36 
spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:08.033 19:45:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.033 19:45:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:08.033 ************************************ 00:07:08.033 START TEST dd_wrong_blocksize 00:07:08.033 ************************************ 00:07:08.033 19:45:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1125 -- # wrong_blocksize 00:07:08.033 19:45:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:08.033 19:45:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:07:08.033 19:45:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:08.033 19:45:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.033 19:45:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.033 19:45:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.033 19:45:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.033 19:45:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.033 19:45:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.033 19:45:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.033 19:45:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:08.033 19:45:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:08.033 [2024-07-24 19:45:36.529770] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:07:08.033 19:45:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # es=22 00:07:08.033 19:45:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:08.033 19:45:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:08.033 19:45:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:08.033 ************************************ 00:07:08.033 END TEST dd_wrong_blocksize 00:07:08.033 ************************************ 00:07:08.033 00:07:08.033 real 0m0.076s 00:07:08.033 user 0m0.041s 00:07:08.033 sys 0m0.033s 00:07:08.033 19:45:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.033 19:45:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:08.033 19:45:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:08.033 19:45:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:08.033 19:45:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.033 
19:45:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:08.033 ************************************ 00:07:08.033 START TEST dd_smaller_blocksize 00:07:08.033 ************************************ 00:07:08.033 19:45:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1125 -- # smaller_blocksize 00:07:08.033 19:45:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:08.033 19:45:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:07:08.033 19:45:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:08.033 19:45:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.033 19:45:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.033 19:45:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.033 19:45:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.033 19:45:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.033 19:45:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.033 19:45:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- 
common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:08.033 19:45:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:08.033 19:45:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:08.033 [2024-07-24 19:45:36.662569] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:07:08.033 [2024-07-24 19:45:36.662675] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63957 ] 00:07:08.292 [2024-07-24 19:45:36.808864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.551 [2024-07-24 19:45:36.986412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.551 [2024-07-24 19:45:37.072526] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:09.118 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:09.118 [2024-07-24 19:45:37.589726] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:09.118 [2024-07-24 19:45:37.590159] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:09.118 [2024-07-24 19:45:37.774573] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:09.377 19:45:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # es=244 00:07:09.377 19:45:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:09.377 19:45:37 
spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # es=116 00:07:09.377 19:45:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # case "$es" in 00:07:09.377 19:45:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@670 -- # es=1 00:07:09.377 19:45:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:09.377 00:07:09.377 real 0m1.320s 00:07:09.377 user 0m0.593s 00:07:09.377 sys 0m0.612s 00:07:09.377 19:45:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.377 ************************************ 00:07:09.377 END TEST dd_smaller_blocksize 00:07:09.377 ************************************ 00:07:09.377 19:45:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:09.377 19:45:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:07:09.377 19:45:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:09.377 19:45:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.377 19:45:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:09.377 ************************************ 00:07:09.377 START TEST dd_invalid_count 00:07:09.377 ************************************ 00:07:09.377 19:45:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1125 -- # invalid_count 00:07:09.377 19:45:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:09.377 19:45:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # local es=0 00:07:09.377 19:45:37 
spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:09.377 19:45:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.377 19:45:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:09.377 19:45:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.377 19:45:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:09.377 19:45:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.377 19:45:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:09.377 19:45:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.377 19:45:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:09.377 19:45:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:09.635 [2024-07-24 19:45:38.044700] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:07:09.635 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # es=22 00:07:09.635 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@661 -- # (( 
es > 128 )) 00:07:09.635 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:09.635 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:09.635 00:07:09.635 real 0m0.081s 00:07:09.635 user 0m0.049s 00:07:09.635 sys 0m0.030s 00:07:09.635 ************************************ 00:07:09.635 END TEST dd_invalid_count 00:07:09.635 ************************************ 00:07:09.635 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.635 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:07:09.635 19:45:38 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:07:09.636 19:45:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:09.636 19:45:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.636 19:45:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:09.636 ************************************ 00:07:09.636 START TEST dd_invalid_oflag 00:07:09.636 ************************************ 00:07:09.636 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1125 -- # invalid_oflag 00:07:09.636 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:09.636 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # local es=0 00:07:09.636 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:09.636 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
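Editor's note: the valid_exec_arg / es fragments traced throughout these negative tests implement an expected-failure wrapper: each invalid spdk_dd invocation must exit non-zero, exit codes above 128 are folded back down (the dd_smaller_blocksize trace shows es=244 becoming 116 and then 1), and the test passes only when the command fails. A minimal sketch of that pattern follows; the helper name and details are assumptions, not a verbatim copy of autotest_common.sh:

```shell
# Minimal sketch of the expected-failure (NOT) pattern traced in the log.
NOT() {
  local es=0
  "$@" || es=$?            # capture the wrapped command's exit status
  if (( es > 128 )); then
    es=$(( es - 128 ))     # fold signal-range exits back down (cf. es=244 -> 116)
  fi
  (( es != 0 ))            # succeed only when the command failed
}
NOT false && echo "failing command accepted"
NOT true  || echo "succeeding command rejected"
```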
00:07:09.636 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:09.636 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.636 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:09.636 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.636 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:09.636 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.636 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:09.636 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:09.636 [2024-07-24 19:45:38.194719] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:07:09.636 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # es=22 00:07:09.636 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:09.636 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:09.636 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:09.636 00:07:09.636 real 0m0.097s 00:07:09.636 user 0m0.058s 00:07:09.636 sys 0m0.037s 00:07:09.636 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.636 19:45:38 
spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:07:09.636 ************************************ 00:07:09.636 END TEST dd_invalid_oflag 00:07:09.636 ************************************ 00:07:09.636 19:45:38 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:07:09.636 19:45:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:09.636 19:45:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.636 19:45:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:09.636 ************************************ 00:07:09.636 START TEST dd_invalid_iflag 00:07:09.636 ************************************ 00:07:09.636 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1125 -- # invalid_iflag 00:07:09.636 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:09.636 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # local es=0 00:07:09.636 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:09.636 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.636 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:09.636 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.636 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:09.636 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag 
-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.636 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:09.636 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.636 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:09.636 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:09.895 [2024-07-24 19:45:38.352036] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:07:09.895 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # es=22 00:07:09.895 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:09.895 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:09.895 ************************************ 00:07:09.895 END TEST dd_invalid_iflag 00:07:09.895 ************************************ 00:07:09.895 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:09.895 00:07:09.895 real 0m0.083s 00:07:09.895 user 0m0.055s 00:07:09.895 sys 0m0.025s 00:07:09.895 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.895 19:45:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:07:09.895 19:45:38 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:07:09.895 19:45:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:09.895 19:45:38 spdk_dd.spdk_dd_negative -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.895 19:45:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:09.895 ************************************ 00:07:09.895 START TEST dd_unknown_flag 00:07:09.895 ************************************ 00:07:09.895 19:45:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1125 -- # unknown_flag 00:07:09.895 19:45:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:09.895 19:45:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # local es=0 00:07:09.895 19:45:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:09.895 19:45:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.895 19:45:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:09.895 19:45:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.895 19:45:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:09.895 19:45:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.895 19:45:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:09.895 19:45:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:09.895 19:45:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:09.895 19:45:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:09.895 [2024-07-24 19:45:38.497123] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:07:09.895 [2024-07-24 19:45:38.497268] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64054 ] 00:07:10.154 [2024-07-24 19:45:38.644296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.154 [2024-07-24 19:45:38.766386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.154 [2024-07-24 19:45:38.813782] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:10.412 [2024-07-24 19:45:38.851984] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:10.412 [2024-07-24 19:45:38.852086] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:10.412 [2024-07-24 19:45:38.852177] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:10.412 [2024-07-24 19:45:38.852195] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:10.412 [2024-07-24 19:45:38.852513] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:07:10.412 [2024-07-24 19:45:38.852534] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:10.412 [2024-07-24 19:45:38.852608] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 
00:07:10.412 [2024-07-24 19:45:38.852621] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:10.412 [2024-07-24 19:45:39.038094] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:10.671 19:45:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # es=234 00:07:10.671 19:45:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:10.671 19:45:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # es=106 00:07:10.671 19:45:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # case "$es" in 00:07:10.671 19:45:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@670 -- # es=1 00:07:10.671 19:45:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:10.671 00:07:10.671 real 0m0.764s 00:07:10.671 user 0m0.499s 00:07:10.671 sys 0m0.163s 00:07:10.671 ************************************ 00:07:10.671 END TEST dd_unknown_flag 00:07:10.671 ************************************ 00:07:10.671 19:45:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.671 19:45:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:07:10.671 19:45:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:07:10.671 19:45:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:10.671 19:45:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.671 19:45:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:10.671 ************************************ 00:07:10.671 START TEST dd_invalid_json 00:07:10.671 ************************************ 00:07:10.671 19:45:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1125 -- # invalid_json 
00:07:10.671 19:45:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:10.671 19:45:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # local es=0 00:07:10.671 19:45:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:10.671 19:45:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:07:10.671 19:45:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:10.671 19:45:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:10.671 19:45:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:10.671 19:45:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:10.671 19:45:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:10.671 19:45:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:10.671 19:45:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:10.671 19:45:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:10.671 19:45:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- 
common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:10.671 [2024-07-24 19:45:39.313723] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:07:10.671 [2024-07-24 19:45:39.313828] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64088 ] 00:07:10.930 [2024-07-24 19:45:39.470224] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.189 [2024-07-24 19:45:39.663473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.189 [2024-07-24 19:45:39.663575] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:07:11.189 [2024-07-24 19:45:39.663593] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:11.189 [2024-07-24 19:45:39.663607] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:11.189 [2024-07-24 19:45:39.663657] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:11.189 19:45:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # es=234 00:07:11.189 19:45:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:11.189 19:45:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # es=106 00:07:11.189 19:45:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # case "$es" in 00:07:11.189 19:45:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@670 -- # es=1 00:07:11.189 19:45:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:11.189 00:07:11.189 real 
0m0.511s 00:07:11.189 user 0m0.291s 00:07:11.189 sys 0m0.116s 00:07:11.189 ************************************ 00:07:11.189 END TEST dd_invalid_json 00:07:11.189 ************************************ 00:07:11.189 19:45:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.189 19:45:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:07:11.189 00:07:11.189 real 0m4.143s 00:07:11.189 user 0m2.057s 00:07:11.189 sys 0m1.734s 00:07:11.189 19:45:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.189 19:45:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:11.189 ************************************ 00:07:11.189 END TEST spdk_dd_negative 00:07:11.189 ************************************ 00:07:11.447 00:07:11.447 real 1m30.919s 00:07:11.447 user 0m58.708s 00:07:11.447 sys 0m41.490s 00:07:11.447 ************************************ 00:07:11.447 END TEST spdk_dd 00:07:11.447 ************************************ 00:07:11.447 19:45:39 spdk_dd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.447 19:45:39 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:11.447 19:45:39 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 00:07:11.447 19:45:39 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:07:11.447 19:45:39 -- spdk/autotest.sh@264 -- # timing_exit lib 00:07:11.447 19:45:39 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:11.447 19:45:39 -- common/autotest_common.sh@10 -- # set +x 00:07:11.447 19:45:39 -- spdk/autotest.sh@266 -- # '[' 1 -eq 1 ']' 00:07:11.447 19:45:39 -- spdk/autotest.sh@267 -- # run_test iscsi_tgt /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/iscsi_tgt.sh 00:07:11.447 19:45:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:11.448 19:45:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.448 19:45:39 -- common/autotest_common.sh@10 -- # set +x 00:07:11.448 
************************************ 00:07:11.448 START TEST iscsi_tgt 00:07:11.448 ************************************ 00:07:11.448 19:45:39 iscsi_tgt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/iscsi_tgt.sh 00:07:11.448 * Looking for test storage... 00:07:11.448 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt 00:07:11.448 19:45:40 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@10 -- # uname -s 00:07:11.448 19:45:40 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:11.448 19:45:40 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:07:11.448 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:07:11.448 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:07:11.448 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:07:11.448 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:07:11.448 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:07:11.448 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:07:11.448 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:07:11.448 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:07:11.448 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:07:11.448 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:07:11.448 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:07:11.448 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:07:11.448 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:07:11.448 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:07:11.448 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 
00:07:11.448 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:07:11.448 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:07:11.448 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:07:11.448 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:07:11.448 Cleaning up iSCSI connection 00:07:11.448 19:45:40 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@18 -- # iscsicleanup 00:07:11.448 19:45:40 iscsi_tgt -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection' 00:07:11.448 19:45:40 iscsi_tgt -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout 00:07:11.448 iscsiadm: No matching sessions found 00:07:11.448 19:45:40 iscsi_tgt -- common/autotest_common.sh@983 -- # true 00:07:11.448 19:45:40 iscsi_tgt -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete 00:07:11.448 iscsiadm: No records found 00:07:11.448 19:45:40 iscsi_tgt -- common/autotest_common.sh@984 -- # true 00:07:11.448 19:45:40 iscsi_tgt -- common/autotest_common.sh@985 -- # rm -rf 00:07:11.448 19:45:40 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@21 -- # create_veth_interfaces 00:07:11.448 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@32 -- # ip link set init_br nomaster 00:07:11.706 Cannot find device "init_br" 00:07:11.706 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@32 -- # true 00:07:11.706 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@33 -- # ip link set tgt_br nomaster 00:07:11.706 Cannot find device "tgt_br" 00:07:11.706 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@33 -- # true 00:07:11.706 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@34 -- # ip link set tgt_br2 nomaster 00:07:11.706 Cannot find device "tgt_br2" 00:07:11.706 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@34 -- # true 00:07:11.706 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@35 -- # ip link set init_br down 00:07:11.706 Cannot find device "init_br" 00:07:11.706 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@35 -- # true 
00:07:11.706 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@36 -- # ip link set tgt_br down 00:07:11.706 Cannot find device "tgt_br" 00:07:11.706 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@36 -- # true 00:07:11.706 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@37 -- # ip link set tgt_br2 down 00:07:11.706 Cannot find device "tgt_br2" 00:07:11.706 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@37 -- # true 00:07:11.706 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@38 -- # ip link delete iscsi_br type bridge 00:07:11.706 Cannot find device "iscsi_br" 00:07:11.706 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@38 -- # true 00:07:11.706 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@39 -- # ip link delete spdk_init_int 00:07:11.706 Cannot find device "spdk_init_int" 00:07:11.706 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@39 -- # true 00:07:11.706 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@40 -- # ip netns exec spdk_iscsi_ns ip link delete spdk_tgt_int 00:07:11.706 Cannot open network namespace "spdk_iscsi_ns": No such file or directory 00:07:11.706 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@40 -- # true 00:07:11.706 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@41 -- # ip netns exec spdk_iscsi_ns ip link delete spdk_tgt_int2 00:07:11.706 Cannot open network namespace "spdk_iscsi_ns": No such file or directory 00:07:11.706 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@41 -- # true 00:07:11.706 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@42 -- # ip netns del spdk_iscsi_ns 00:07:11.706 Cannot remove namespace file "/var/run/netns/spdk_iscsi_ns": No such file or directory 00:07:11.706 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@42 -- # true 00:07:11.706 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@44 -- # trap 'cleanup_veth_interfaces; exit 1' SIGINT SIGTERM EXIT 00:07:11.706 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@47 -- # ip netns add spdk_iscsi_ns 00:07:11.706 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@50 -- # ip link add spdk_init_int type veth peer name init_br 00:07:11.706 19:45:40 iscsi_tgt -- 
iscsi_tgt/common.sh@51 -- # ip link add spdk_tgt_int type veth peer name tgt_br 00:07:11.706 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@52 -- # ip link add spdk_tgt_int2 type veth peer name tgt_br2 00:07:11.706 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@55 -- # ip link set spdk_tgt_int netns spdk_iscsi_ns 00:07:11.706 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@56 -- # ip link set spdk_tgt_int2 netns spdk_iscsi_ns 00:07:11.706 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@59 -- # ip addr add 10.0.0.2/24 dev spdk_init_int 00:07:11.706 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@60 -- # ip netns exec spdk_iscsi_ns ip addr add 10.0.0.1/24 dev spdk_tgt_int 00:07:11.706 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@61 -- # ip netns exec spdk_iscsi_ns ip addr add 10.0.0.3/24 dev spdk_tgt_int2 00:07:11.706 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@64 -- # ip link set spdk_init_int up 00:07:11.706 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@65 -- # ip link set init_br up 00:07:11.706 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@66 -- # ip link set tgt_br up 00:07:11.706 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@67 -- # ip link set tgt_br2 up 00:07:11.706 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@68 -- # ip netns exec spdk_iscsi_ns ip link set spdk_tgt_int up 00:07:11.965 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@69 -- # ip netns exec spdk_iscsi_ns ip link set spdk_tgt_int2 up 00:07:11.965 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@70 -- # ip netns exec spdk_iscsi_ns ip link set lo up 00:07:11.965 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@73 -- # ip link add iscsi_br type bridge 00:07:11.965 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@74 -- # ip link set iscsi_br up 00:07:11.965 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@77 -- # ip link set init_br master iscsi_br 00:07:11.965 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@78 -- # ip link set tgt_br master iscsi_br 00:07:11.965 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@79 -- # ip link set tgt_br2 master iscsi_br 00:07:11.965 19:45:40 iscsi_tgt -- 
iscsi_tgt/common.sh@82 -- # iptables -I INPUT 1 -i spdk_init_int -p tcp --dport 3260 -j ACCEPT 00:07:11.965 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@83 -- # iptables -A FORWARD -i iscsi_br -o iscsi_br -j ACCEPT 00:07:11.965 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@86 -- # ping -c 1 10.0.0.1 00:07:11.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:11.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:07:11.965 00:07:11.965 --- 10.0.0.1 ping statistics --- 00:07:11.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.965 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:07:11.965 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@87 -- # ping -c 1 10.0.0.3 00:07:11.965 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:11.965 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:07:11.965 00:07:11.965 --- 10.0.0.3 ping statistics --- 00:07:11.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.965 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:07:11.965 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@88 -- # ip netns exec spdk_iscsi_ns ping -c 1 10.0.0.2 00:07:11.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:11.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.027 ms 00:07:11.965 00:07:11.965 --- 10.0.0.2 ping statistics --- 00:07:11.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.965 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:07:11.965 19:45:40 iscsi_tgt -- iscsi_tgt/common.sh@89 -- # ip netns exec spdk_iscsi_ns ping -c 1 10.0.0.2 00:07:11.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:11.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.039 ms 00:07:11.965 00:07:11.965 --- 10.0.0.2 ping statistics --- 00:07:11.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.965 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:07:11.965 19:45:40 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@23 -- # trap 'cleanup_veth_interfaces; exit 1' SIGINT SIGTERM EXIT 00:07:11.965 19:45:40 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@25 -- # run_test iscsi_tgt_sock /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/sock/sock.sh 00:07:11.965 19:45:40 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:11.965 19:45:40 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.965 19:45:40 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:07:11.965 ************************************ 00:07:11.965 START TEST iscsi_tgt_sock 00:07:11.965 ************************************ 00:07:11.965 19:45:40 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/sock/sock.sh 00:07:12.223 * Looking for test storage... 
00:07:12.223 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/sock 00:07:12.223 19:45:40 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:07:12.223 19:45:40 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:07:12.223 19:45:40 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:07:12.223 19:45:40 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:07:12.223 19:45:40 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:07:12.223 19:45:40 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:07:12.223 19:45:40 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:07:12.223 19:45:40 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:07:12.223 19:45:40 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:07:12.223 19:45:40 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:07:12.223 19:45:40 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:07:12.223 19:45:40 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:07:12.223 19:45:40 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:07:12.223 19:45:40 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:07:12.223 19:45:40 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:07:12.223 19:45:40 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:07:12.223 19:45:40 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:07:12.223 19:45:40 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:07:12.223 19:45:40 
iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:07:12.223 19:45:40 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:07:12.223 19:45:40 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@48 -- # iscsitestinit 00:07:12.223 19:45:40 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:07:12.223 19:45:40 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@50 -- # HELLO_SOCK_APP='ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/examples/hello_sock' 00:07:12.223 19:45:40 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@51 -- # SOCAT_APP=socat 00:07:12.223 19:45:40 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@52 -- # OPENSSL_APP=openssl 00:07:12.223 19:45:40 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@53 -- # PSK='-N ssl -E 1234567890ABCDEF -I psk.spdk.io' 00:07:12.223 19:45:40 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@58 -- # timing_enter sock_client 00:07:12.223 19:45:40 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:12.223 19:45:40 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@10 -- # set +x 00:07:12.223 Testing client path 00:07:12.223 19:45:40 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@59 -- # echo 'Testing client path' 00:07:12.223 19:45:40 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@63 -- # server_pid=64338 00:07:12.223 19:45:40 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@64 -- # trap 'killprocess $server_pid;iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:07:12.223 19:45:40 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@62 -- # socat tcp-l:3260,fork,bind=10.0.0.2 exec:/bin/cat 00:07:12.223 19:45:40 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@66 -- # waitfortcp 64338 10.0.0.2:3260 00:07:12.223 19:45:40 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@25 -- # local addr=10.0.0.2:3260 00:07:12.223 19:45:40 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@27 -- # echo 'Waiting for process to start up and listen on address 10.0.0.2:3260...' 
00:07:12.223 Waiting for process to start up and listen on address 10.0.0.2:3260... 00:07:12.223 19:45:40 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@29 -- # xtrace_disable 00:07:12.223 19:45:40 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@10 -- # set +x 00:07:12.791 [2024-07-24 19:45:41.215346] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:07:12.791 [2024-07-24 19:45:41.215461] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64342 ] 00:07:12.791 [2024-07-24 19:45:41.363675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.049 [2024-07-24 19:45:41.546579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.049 [2024-07-24 19:45:41.546679] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:07:13.049 [2024-07-24 19:45:41.546713] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.2:3260 with sock_impl(posix) 00:07:13.049 [2024-07-24 19:45:41.546974] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.2, 3260) to (10.0.0.1, 42602) 00:07:13.049 [2024-07-24 19:45:41.547067] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 00:07:13.985 [2024-07-24 19:45:42.547105] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:07:13.985 [2024-07-24 19:45:42.547355] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:07:14.243 [2024-07-24 19:45:42.702096] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:07:14.244 [2024-07-24 19:45:42.702195] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64372 ] 00:07:14.244 [2024-07-24 19:45:42.846253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.501 [2024-07-24 19:45:43.009346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.501 [2024-07-24 19:45:43.009427] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:07:14.501 [2024-07-24 19:45:43.009457] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.2:3260 with sock_impl(posix) 00:07:14.501 [2024-07-24 19:45:43.009645] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.2, 3260) to (10.0.0.1, 49988) 00:07:14.501 [2024-07-24 19:45:43.009726] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 00:07:15.438 [2024-07-24 19:45:44.009763] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:07:15.438 [2024-07-24 19:45:44.010005] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:07:15.697 [2024-07-24 19:45:44.125924] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:07:15.697 [2024-07-24 19:45:44.126076] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64391 ] 00:07:15.697 [2024-07-24 19:45:44.271634] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.956 [2024-07-24 19:45:44.390777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.956 [2024-07-24 19:45:44.390867] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:07:15.956 [2024-07-24 19:45:44.390898] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.2:3260 with sock_impl(posix) 00:07:15.956 [2024-07-24 19:45:44.391242] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.2, 3260) to (10.0.0.1, 49992) 00:07:15.956 [2024-07-24 19:45:44.391325] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 00:07:16.923 [2024-07-24 19:45:45.391360] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:07:16.923 [2024-07-24 19:45:45.391555] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:07:16.923 killing process with pid 64338 00:07:16.923 Testing SSL server path 00:07:16.923 Waiting for process to start up and listen on address 10.0.0.1:3260... 00:07:17.181 [2024-07-24 19:45:45.602297] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:07:17.181 [2024-07-24 19:45:45.602402] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64433 ] 00:07:17.181 [2024-07-24 19:45:45.746813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.440 [2024-07-24 19:45:45.938575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.440 [2024-07-24 19:45:45.938692] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:07:17.441 [2024-07-24 19:45:45.938808] hello_sock.c: 472:hello_sock_listen: *NOTICE*: Listening connection on 10.0.0.1:3260 with sock_impl(ssl) 00:07:17.700 [2024-07-24 19:45:46.123462] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:07:17.700 [2024-07-24 19:45:46.123576] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64445 ] 00:07:17.700 [2024-07-24 19:45:46.267534] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.958 [2024-07-24 19:45:46.398055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.958 [2024-07-24 19:45:46.398378] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:07:17.958 [2024-07-24 19:45:46.398542] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:07:17.958 [2024-07-24 19:45:46.401786] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 44892) to (10.0.0.1, 3260)
00:07:17.958 [2024-07-24 19:45:46.401788] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 44892) 00:07:17.958 [2024-07-24 19:45:46.403561] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 00:07:18.892 [2024-07-24 19:45:47.403775] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:07:18.892 [2024-07-24 19:45:47.404264] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:07:18.892 [2024-07-24 19:45:47.404461] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:07:19.156 [2024-07-24 19:45:47.563857] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:07:19.156 [2024-07-24 19:45:47.563990] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64462 ] 00:07:19.156 [2024-07-24 19:45:47.703227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.414 [2024-07-24 19:45:47.885137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.414 [2024-07-24 19:45:47.885574] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:07:19.414 [2024-07-24 19:45:47.885726] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:07:19.414 [2024-07-24 19:45:47.887481] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 44902) to (10.0.0.1, 3260) 00:07:19.414 [2024-07-24 19:45:47.889083] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 44902) 00:07:19.414 [2024-07-24 19:45:47.890568] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 
00:07:20.348 [2024-07-24 19:45:48.890792] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:07:20.349 [2024-07-24 19:45:48.891276] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:07:20.349 [2024-07-24 19:45:48.891392] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:07:20.606 [2024-07-24 19:45:49.049972] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:07:20.607 [2024-07-24 19:45:49.050074] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64489 ] 00:07:20.607 [2024-07-24 19:45:49.191889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.864 [2024-07-24 19:45:49.347559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.864 [2024-07-24 19:45:49.347912] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:07:20.864 [2024-07-24 19:45:49.348045] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:07:20.864 [2024-07-24 19:45:49.349065] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 44908) to (10.0.0.1, 3260) 00:07:20.864 [2024-07-24 19:45:49.350357] posix.c: 755:posix_sock_create_ssl_context: *ERROR*: Incorrect TLS version provided: 7 00:07:20.864 [2024-07-24 19:45:49.350525] posix.c:1033:posix_sock_create: *ERROR*: posix_sock_create_ssl_context() failed, errno = 2 00:07:20.865 [2024-07-24 19:45:49.350641] hello_sock.c: 309:hello_sock_connect: *ERROR*: connect error(2): No such file or directory 00:07:20.865 [2024-07-24 19:45:49.350680] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:20.865 [2024-07-24 19:45:49.350693] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection 
closed 00:07:20.865 [2024-07-24 19:45:49.350854] hello_sock.c: 591:main: *ERROR*: ERROR starting application 00:07:20.865 [2024-07-24 19:45:49.350947] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:07:20.865 [2024-07-24 19:45:49.504731] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:07:20.865 [2024-07-24 19:45:49.504839] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64493 ] 00:07:21.123 [2024-07-24 19:45:49.648150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.381 [2024-07-24 19:45:49.823818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.382 [2024-07-24 19:45:49.824090] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:07:21.382 [2024-07-24 19:45:49.824478] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:07:21.382 [2024-07-24 19:45:49.825440] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 44910) to (10.0.0.1, 3260) 00:07:21.382 [2024-07-24 19:45:49.827169] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 44910) 00:07:21.382 [2024-07-24 19:45:49.828406] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 
00:07:22.315 [2024-07-24 19:45:50.828632] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:07:22.315 [2024-07-24 19:45:50.828957] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:07:22.315 [2024-07-24 19:45:50.829074] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:07:22.572 SSL_connect:before SSL initialization 00:07:22.572 SSL_connect:SSLv3/TLS write client hello 00:07:22.572 [2024-07-24 19:45:51.016157] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.2, 43036) to (10.0.0.1, 3260) 00:07:22.572 SSL_connect:SSLv3/TLS write client hello 00:07:22.572 SSL_connect:SSLv3/TLS read server hello 00:07:22.572 Can't use SSL_get_servername 00:07:22.572 SSL_connect:TLSv1.3 read encrypted extensions 00:07:22.572 SSL_connect:SSLv3/TLS read finished 00:07:22.572 SSL_connect:SSLv3/TLS write change cipher spec 00:07:22.572 SSL_connect:SSLv3/TLS write finished 00:07:22.572 SSL_connect:SSL negotiation finished successfully 00:07:22.572 SSL_connect:SSL negotiation finished successfully 00:07:22.572 SSL_connect:SSLv3/TLS read server session ticket 00:07:24.472 DONE 00:07:24.472 [2024-07-24 19:45:52.965683] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:07:24.472 SSL3 alert write:warning:close notify 00:07:24.472 [2024-07-24 19:45:53.001879] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:07:24.472 [2024-07-24 19:45:53.001999] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64544 ] 00:07:24.730 [2024-07-24 19:45:53.148139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.730 [2024-07-24 19:45:53.323189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.730 [2024-07-24 19:45:53.323723] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:07:24.730 [2024-07-24 19:45:53.324088] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:07:24.730 [2024-07-24 19:45:53.325170] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 54184) to (10.0.0.1, 3260) 00:07:24.730 [2024-07-24 19:45:53.328538] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 54184) 00:07:24.730 [2024-07-24 19:45:53.329508] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:07:24.730 [2024-07-24 19:45:53.329512] hello_sock.c: 240:hello_sock_writev_poll: *ERROR*: Write to socket failed. Closing connection... 
00:07:24.730 [2024-07-24 19:45:53.329883] hello_sock.c: 208:hello_sock_recv_poll: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:07:26.101 [2024-07-24 19:45:54.329870] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:07:26.101 [2024-07-24 19:45:54.330404] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:26.101 [2024-07-24 19:45:54.330590] hello_sock.c: 591:main: *ERROR*: ERROR starting application 00:07:26.101 [2024-07-24 19:45:54.330695] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:07:26.101 [2024-07-24 19:45:54.490909] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:07:26.101 [2024-07-24 19:45:54.491033] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64563 ] 00:07:26.101 [2024-07-24 19:45:54.630626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.359 [2024-07-24 19:45:54.783929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.359 [2024-07-24 19:45:54.784243] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:07:26.359 [2024-07-24 19:45:54.784376] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:07:26.359 [2024-07-24 19:45:54.785436] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 54192) to (10.0.0.1, 3260) 00:07:26.359 [2024-07-24 19:45:54.786641] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 54192) 00:07:26.359 [2024-07-24 19:45:54.787272] posix.c: 586:posix_sock_psk_find_session_server_cb: *ERROR*: Unknown Client's PSK ID 00:07:26.359 [2024-07-24 19:45:54.787344] 
hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:07:26.359 [2024-07-24 19:45:54.787439] hello_sock.c: 208:hello_sock_recv_poll: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:07:27.293 [2024-07-24 19:45:55.787442] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:07:27.293 [2024-07-24 19:45:55.787968] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:27.293 [2024-07-24 19:45:55.788066] hello_sock.c: 591:main: *ERROR*: ERROR starting application 00:07:27.293 [2024-07-24 19:45:55.788235] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:07:27.293 killing process with pid 64433 00:07:28.668 [2024-07-24 19:45:56.938529] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:07:28.668 [2024-07-24 19:45:56.938786] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:07:28.668 Waiting for process to start up and listen on address 10.0.0.1:3260... 00:07:28.668 [2024-07-24 19:45:57.147619] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:07:28.668 [2024-07-24 19:45:57.147725] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64614 ] 00:07:28.668 [2024-07-24 19:45:57.289627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.926 [2024-07-24 19:45:57.469008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.926 [2024-07-24 19:45:57.469130] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:07:28.926 [2024-07-24 19:45:57.469247] hello_sock.c: 472:hello_sock_listen: *NOTICE*: Listening connection on 10.0.0.1:3260 with sock_impl(posix) 00:07:29.184 [2024-07-24 19:45:57.647796] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.2, 47114) to (10.0.0.1, 3260) 00:07:29.184 [2024-07-24 19:45:57.647989] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:07:29.184 killing process with pid 64614 00:07:30.118 [2024-07-24 19:45:58.684806] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:07:30.118 [2024-07-24 19:45:58.685080] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:07:30.376 00:07:30.376 real 0m18.273s 00:07:30.376 user 0m21.106s 00:07:30.376 sys 0m3.365s 00:07:30.376 19:45:58 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.376 19:45:58 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@10 -- # set +x 00:07:30.376 ************************************ 00:07:30.376 END TEST iscsi_tgt_sock 00:07:30.376 ************************************ 00:07:30.376 19:45:58 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@26 -- # [[ -d /usr/local/calsoft ]] 00:07:30.376 19:45:58 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@27 -- # run_test iscsi_tgt_calsoft /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/calsoft.sh 
00:07:30.377 19:45:58 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:30.377 19:45:58 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.377 19:45:58 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:07:30.377 ************************************ 00:07:30.377 START TEST iscsi_tgt_calsoft 00:07:30.377 ************************************ 00:07:30.377 19:45:58 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/calsoft.sh 00:07:30.377 * Looking for test storage... 00:07:30.377 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@21 -- # 
TARGET_IP2=10.0.0.3 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@15 -- # MALLOC_BDEV_SIZE=64 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@16 -- # MALLOC_BLOCK_SIZE=512 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@18 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@19 -- # calsoft_py=/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/calsoft.py 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@22 -- # mkdir -p /usr/local/etc 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@23 -- # cp /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/its.conf /usr/local/etc/ 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@26 -- # echo IP=10.0.0.1 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@28 -- # timing_enter start_iscsi_tgt 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@10 -- # 
set +x 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@30 -- # iscsitestinit 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@33 -- # pid=64706 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@32 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x1 --wait-for-rpc 00:07:30.377 Process pid: 64706 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@34 -- # echo 'Process pid: 64706' 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@36 -- # trap 'killprocess $pid; delete_tmp_conf_files; iscsitestfini; exit 1 ' SIGINT SIGTERM EXIT 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@38 -- # waitforlisten 64706 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@831 -- # '[' -z 64706 ']' 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:30.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:30.377 19:45:59 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@10 -- # set +x 00:07:30.636 [2024-07-24 19:45:59.077632] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:07:30.636 [2024-07-24 19:45:59.077738] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64706 ] 00:07:30.636 [2024-07-24 19:45:59.223072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.894 [2024-07-24 19:45:59.391226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.840 19:46:00 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:31.840 19:46:00 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@864 -- # return 0 00:07:31.840 19:46:00 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:07:31.840 19:46:00 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:07:32.099 [2024-07-24 19:46:00.729274] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:32.357 iscsi_tgt is listening. Running tests... 00:07:32.357 19:46:01 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@41 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:07:32.357 19:46:01 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@43 -- # timing_exit start_iscsi_tgt 00:07:32.357 19:46:01 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:32.357 19:46:01 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@10 -- # set +x 00:07:32.615 19:46:01 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_auth_group 1 -c 'user:root secret:tester' 00:07:32.875 19:46:01 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_discovery_auth -g 1 00:07:33.133 19:46:01 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:07:33.392 19:46:01 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:07:33.650 19:46:02 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create -b MyBdev 64 512 00:07:34.216 MyBdev 00:07:34.216 19:46:02 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias MyBdev:0 1:2 64 -g 1 00:07:34.474 19:46:03 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@55 -- # sleep 1 00:07:35.407 19:46:04 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@57 -- # '[' '' ']' 00:07:35.407 19:46:04 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/calsoft.py /home/vagrant/spdk_repo/spdk/../output 00:07:35.974 [2024-07-24 19:46:04.392785] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:35.974 [2024-07-24 19:46:04.392922] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:35.974 [2024-07-24 19:46:04.430703] 
iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:07:35.974 [2024-07-24 19:46:04.430761] iscsi.c:4566:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on iqn.2016-06.io.spdk:Target3,t,0x0001(iqn.1994-05.com.redhat:b3283535dc3b,i,0x00230d030000) 00:07:35.974 [2024-07-24 19:46:04.430774] iscsi.c:4840:iscsi_read_pdu: *ERROR*: Critical error is detected. Close the connection 00:07:35.974 [2024-07-24 19:46:04.467308] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6 00:07:35.974 [2024-07-24 19:46:04.485586] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key TaskReporting 00:07:35.974 [2024-07-24 19:46:04.523127] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=12 00:07:35.974 [2024-07-24 19:46:04.540292] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:35.974 [2024-07-24 19:46:04.558600] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:35.974 [2024-07-24 19:46:04.593287] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key TaskReporting 00:07:36.232 [2024-07-24 19:46:04.661734] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(3) error ExpCmdSN=4 00:07:36.232 [2024-07-24 19:46:04.661887] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:07:36.232 [2024-07-24 19:46:04.680167] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:07:36.232 [2024-07-24 19:46:04.698177] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:36.232 [2024-07-24 19:46:04.698296] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:36.232 [2024-07-24 19:46:04.715912] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=12 00:07:36.232 [2024-07-24 19:46:04.734902] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:36.232 [2024-07-24 19:46:04.735014] 
iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:36.232 [2024-07-24 19:46:04.755040] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:36.232 [2024-07-24 19:46:04.755166] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:36.232 [2024-07-24 19:46:04.769585] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:36.232 [2024-07-24 19:46:04.789699] iscsi.c:4234:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 2745410467, and the dataout task tag is 2728567458 00:07:36.232 [2024-07-24 19:46:04.789827] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:07:36.232 [2024-07-24 19:46:04.789906] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:07:36.232 [2024-07-24 19:46:04.789969] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6 00:07:36.232 [2024-07-24 19:46:04.869813] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:36.232 [2024-07-24 19:46:04.870184] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:36.232 [2024-07-24 19:46:04.891019] iscsi.c:3446:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:07:36.232 [2024-07-24 19:46:04.891315] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:07:36.529 [2024-07-24 19:46:04.906883] iscsi.c:4522:iscsi_pdu_hdr_handle: *ERROR*: before Full Feature 00:07:36.529 PDU 00:07:36.529 00000000 01 81 00 00 00 00 00 81 00 02 3d 03 00 00 00 00 ..........=..... 00:07:36.529 00000010 00 00 00 05 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00:07:36.529 00000020 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00:07:36.529 [2024-07-24 19:46:04.907100] iscsi.c:4840:iscsi_read_pdu: *ERROR*: Critical error is detected. 
Close the connection 00:07:36.529 [2024-07-24 19:46:04.924185] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(2) ignore (ExpCmdSN=3, MaxCmdSN=66) 00:07:36.529 [2024-07-24 19:46:04.924260] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:36.529 [2024-07-24 19:46:04.924308] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:36.529 [2024-07-24 19:46:04.974402] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:36.529 [2024-07-24 19:46:04.974527] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:36.529 [2024-07-24 19:46:05.004119] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:36.529 [2024-07-24 19:46:05.070517] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:36.529 [2024-07-24 19:46:05.070633] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:36.529 [2024-07-24 19:46:05.105775] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:36.529 [2024-07-24 19:46:05.156068] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:36.529 [2024-07-24 19:46:05.156180] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:36.788 [2024-07-24 19:46:05.204220] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=2 00:07:36.788 [2024-07-24 19:46:05.240592] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:36.788 [2024-07-24 19:46:05.240737] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:36.788 [2024-07-24 19:46:05.256457] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:36.788 [2024-07-24 19:46:05.310691] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:36.788 [2024-07-24 19:46:05.329225] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:36.788 
[2024-07-24 19:46:05.349170] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:07:36.788 [2024-07-24 19:46:05.382474] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key TaskReporting 00:07:36.788 [2024-07-24 19:46:05.434658] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:36.788 [2024-07-24 19:46:05.434779] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:36.788 [2024-07-24 19:46:05.449529] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:07:37.046 [2024-07-24 19:46:05.464413] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:07:37.046 [2024-07-24 19:46:05.483741] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:37.046 [2024-07-24 19:46:05.483858] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:37.046 [2024-07-24 19:46:05.517500] iscsi.c:4522:iscsi_pdu_hdr_handle: *ERROR*: before Full Feature 00:07:37.046 PDU 00:07:37.046 00000000 00 81 00 00 00 00 00 81 00 02 3d 03 00 00 00 00 ..........=..... 00:07:37.046 00000010 00 00 00 05 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00:07:37.046 00000020 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00:07:37.046 [2024-07-24 19:46:05.517564] iscsi.c:4840:iscsi_read_pdu: *ERROR*: Critical error is detected. 
Close the connection 00:07:37.046 [2024-07-24 19:46:05.535422] param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 276 00:07:37.046 [2024-07-24 19:46:05.535465] iscsi.c:1303:iscsi_op_login_store_incoming_params: *ERROR*: iscsi_parse_params() failed 00:07:37.046 [2024-07-24 19:46:05.553309] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:07:37.046 [2024-07-24 19:46:05.604728] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(341) ignore (ExpCmdSN=8, MaxCmdSN=71) 00:07:37.046 [2024-07-24 19:46:05.604834] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(8) ignore (ExpCmdSN=9, MaxCmdSN=71) 00:07:37.046 [2024-07-24 19:46:05.605215] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=12 00:07:37.046 [2024-07-24 19:46:05.627069] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:37.046 [2024-07-24 19:46:05.645178] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:37.046 [2024-07-24 19:46:05.645274] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:37.305 [2024-07-24 19:46:05.722244] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:39.204 [2024-07-24 19:46:07.681821] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:39.204 [2024-07-24 19:46:07.698274] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:07:39.204 [2024-07-24 19:46:07.731618] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:39.204 [2024-07-24 19:46:07.731738] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:39.204 [2024-07-24 19:46:07.749114] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:39.204 [2024-07-24 19:46:07.749221] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(4) ignore (ExpCmdSN=5, MaxCmdSN=67) 00:07:39.204 [2024-07-24 19:46:07.749356] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) 
error ExpCmdSN=6 00:07:39.204 [2024-07-24 19:46:07.832825] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:39.204 [2024-07-24 19:46:07.832965] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:39.462 [2024-07-24 19:46:07.902125] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:39.462 [2024-07-24 19:46:07.902268] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:39.462 [2024-07-24 19:46:07.920413] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:39.462 [2024-07-24 19:46:07.920539] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:39.462 [2024-07-24 19:46:07.938311] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:07:39.462 [2024-07-24 19:46:07.972306] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:39.462 [2024-07-24 19:46:07.972412] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:39.462 [2024-07-24 19:46:07.990347] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:07:39.462 [2024-07-24 19:46:08.008550] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:39.462 [2024-07-24 19:46:08.008653] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:39.462 [2024-07-24 19:46:08.049843] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:39.462 [2024-07-24 19:46:08.050025] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:39.462 [2024-07-24 19:46:08.068394] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:39.462 [2024-07-24 19:46:08.068529] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:39.721 [2024-07-24 19:46:08.137731] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 
00:07:39.721 [2024-07-24 19:46:08.205787] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:39.721 [2024-07-24 19:46:08.248863] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:39.721 [2024-07-24 19:46:08.249262] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:39.721 [2024-07-24 19:46:08.299275] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=ffffffff 00:07:39.721 [2024-07-24 19:46:08.336287] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key TaskReporting 00:07:39.721 [2024-07-24 19:46:08.352803] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:39.721 [2024-07-24 19:46:08.385480] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:39.721 [2024-07-24 19:46:08.385814] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:39.980 [2024-07-24 19:46:08.449231] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6 00:07:39.980 [2024-07-24 19:46:08.469536] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(1) ignore (ExpCmdSN=3, MaxCmdSN=66) 00:07:39.980 [2024-07-24 19:46:08.469943] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(1) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:39.980 [2024-07-24 19:46:08.470161] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=5, MaxCmdSN=67) 00:07:39.980 [2024-07-24 19:46:08.470315] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=6, MaxCmdSN=67) 00:07:39.980 [2024-07-24 19:46:08.470886] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=9 00:07:39.980 [2024-07-24 19:46:08.492768] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:39.980 [2024-07-24 19:46:08.493102] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:39.980 [2024-07-24 19:46:08.555681] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore 
(ExpCmdSN=4, MaxCmdSN=66) 00:07:39.980 [2024-07-24 19:46:08.556075] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:39.980 [2024-07-24 19:46:08.588491] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:07:39.980 [2024-07-24 19:46:08.626500] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:39.980 [2024-07-24 19:46:08.644527] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:39.980 [2024-07-24 19:46:08.644646] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:40.238 [2024-07-24 19:46:08.661195] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:07:40.238 [2024-07-24 19:46:08.680992] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:40.238 [2024-07-24 19:46:08.681094] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:40.238 [2024-07-24 19:46:08.721760] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:07:40.238 [2024-07-24 19:46:08.739274] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:40.238 [2024-07-24 19:46:08.739431] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:40.238 [2024-07-24 19:46:08.796013] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:40.238 [2024-07-24 19:46:08.815505] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:40.238 [2024-07-24 19:46:08.815625] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:40.238 [2024-07-24 19:46:08.833762] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:40.238 [2024-07-24 19:46:08.833822] iscsi.c:3961:iscsi_handle_recovery_datain: *ERROR*: Initiator requests BegRun: 0x00000000, RunLength:0x00001000 greater than maximum DataSN: 0x00000004. 
00:07:40.238 [2024-07-24 19:46:08.833835] iscsi.c:4566:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=10) failed on iqn.2016-06.io.spdk:Target3,t,0x0001(iqn.1994-05.com.redhat:b3283535dc3b,i,0x00230d030000) 00:07:40.238 [2024-07-24 19:46:08.833845] iscsi.c:4840:iscsi_read_pdu: *ERROR*: Critical error is detected. Close the connection 00:07:40.238 [2024-07-24 19:46:08.852887] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=2 00:07:40.238 [2024-07-24 19:46:08.889746] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:07:40.498 [2024-07-24 19:46:08.909172] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:40.498 [2024-07-24 19:46:08.909317] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:40.498 [2024-07-24 19:46:08.944308] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:40.498 [2024-07-24 19:46:08.944435] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:40.498 [2024-07-24 19:46:08.963189] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:40.498 [2024-07-24 19:46:08.978205] iscsi.c:1263:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 4/max 1, expecting 0 00:07:40.498 [2024-07-24 19:46:08.998378] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:40.498 [2024-07-24 19:46:08.998486] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:40.498 [2024-07-24 19:46:09.019357] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=8, MaxCmdSN=71) 00:07:40.498 [2024-07-24 19:46:09.019469] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=9 00:07:40.498 [2024-07-24 19:46:09.056210] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6 00:07:40.498 [2024-07-24 19:46:09.090125] iscsi.c:1263:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 4/max 1, expecting 
0 00:07:40.498 [2024-07-24 19:46:09.108683] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:40.498 [2024-07-24 19:46:09.108805] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:40.498 [2024-07-24 19:46:09.161336] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:40.498 [2024-07-24 19:46:09.161444] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:40.757 [2024-07-24 19:46:09.179462] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:40.757 [2024-07-24 19:46:09.235837] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:40.757 [2024-07-24 19:46:09.307577] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key ImmediateDataa 00:07:40.757 [2024-07-24 19:46:09.372744] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:40.757 [2024-07-24 19:46:09.415030] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=9 00:07:42.132 [2024-07-24 19:46:10.455546] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:43.066 [2024-07-24 19:46:11.437291] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=6, MaxCmdSN=68) 00:07:43.066 [2024-07-24 19:46:11.437692] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=7 00:07:43.066 [2024-07-24 19:46:11.455770] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=5, MaxCmdSN=68) 00:07:44.002 [2024-07-24 19:46:12.456026] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(4) ignore (ExpCmdSN=6, MaxCmdSN=69) 00:07:44.002 [2024-07-24 19:46:12.456275] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=7, MaxCmdSN=70) 00:07:44.002 [2024-07-24 19:46:12.456295] iscsi.c:4028:iscsi_handle_status_snack: *ERROR*: Unable to find StatSN: 0x00000007. For a StatusSNACK, assuming this is a proactive SNACK for an untransmitted StatSN, ignoring. 
00:07:44.002 [2024-07-24 19:46:12.456314] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=8 00:07:56.208 [2024-07-24 19:46:24.507512] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1073741824) error ExpCmdSN=64 00:07:56.208 [2024-07-24 19:46:24.529095] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1073741824) error ExpCmdSN=64 00:07:56.208 [2024-07-24 19:46:24.545943] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1073741824) error ExpCmdSN=64 00:07:56.208 [2024-07-24 19:46:24.548394] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:07:56.208 [2024-07-24 19:46:24.570163] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:07:56.208 [2024-07-24 19:46:24.586201] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:07:56.208 [2024-07-24 19:46:24.609358] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1073741824) error ExpCmdSN=64 00:07:56.208 [2024-07-24 19:46:24.650315] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:07:56.208 [2024-07-24 19:46:24.653259] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=64 00:07:56.208 [2024-07-24 19:46:24.672410] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1107296256) error ExpCmdSN=66 00:07:56.208 [2024-07-24 19:46:24.694315] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:07:56.208 [2024-07-24 19:46:24.713333] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=67 00:07:56.209 Skipping tc_ffp_15_2. It is known to fail. 00:07:56.209 Skipping tc_ffp_29_2. It is known to fail. 00:07:56.209 Skipping tc_ffp_29_3. It is known to fail. 00:07:56.209 Skipping tc_ffp_29_4. It is known to fail. 00:07:56.209 Skipping tc_err_1_1. It is known to fail. 00:07:56.209 Skipping tc_err_1_2. It is known to fail. 00:07:56.209 Skipping tc_err_2_8. It is known to fail. 00:07:56.209 Skipping tc_err_3_1. It is known to fail. 00:07:56.209 Skipping tc_err_3_2. It is known to fail. 
00:07:56.209 Skipping tc_err_3_3. It is known to fail. 00:07:56.209 Skipping tc_err_3_4. It is known to fail. 00:07:56.209 Skipping tc_err_5_1. It is known to fail. 00:07:56.209 Skipping tc_login_3_1. It is known to fail. 00:07:56.209 Skipping tc_login_11_2. It is known to fail. 00:07:56.209 Skipping tc_login_11_4. It is known to fail. 00:07:56.209 Skipping tc_login_2_2. It is known to fail. 00:07:56.209 Skipping tc_login_29_1. It is known to fail. 00:07:56.209 19:46:24 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@62 -- # failed=0 00:07:56.209 19:46:24 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:56.209 Cleaning up iSCSI connection 00:07:56.209 19:46:24 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@67 -- # iscsicleanup 00:07:56.209 19:46:24 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection' 00:07:56.209 19:46:24 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout 00:07:56.209 iscsiadm: No matching sessions found 00:07:56.209 19:46:24 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@983 -- # true 00:07:56.209 19:46:24 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete 00:07:56.209 iscsiadm: No records found 00:07:56.209 19:46:24 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@984 -- # true 00:07:56.209 19:46:24 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@985 -- # rm -rf 00:07:56.209 19:46:24 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@68 -- # killprocess 64706 00:07:56.209 19:46:24 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@950 -- # '[' -z 64706 ']' 00:07:56.209 19:46:24 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@954 -- # kill -0 64706 00:07:56.209 19:46:24 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@955 -- # uname 00:07:56.209 19:46:24 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:07:56.209 19:46:24 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64706 00:07:56.209 19:46:24 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:56.209 19:46:24 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:56.209 killing process with pid 64706 00:07:56.209 19:46:24 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64706' 00:07:56.209 19:46:24 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@969 -- # kill 64706 00:07:56.209 19:46:24 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@974 -- # wait 64706 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@69 -- # delete_tmp_conf_files 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@12 -- # rm -f /usr/local/etc/its.conf 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@70 -- # iscsitestfini 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@71 -- # exit 0 00:07:57.146 00:07:57.146 real 0m26.547s 00:07:57.146 user 0m40.433s 00:07:57.146 sys 0m5.436s 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@10 -- # set +x 00:07:57.146 ************************************ 00:07:57.146 END TEST iscsi_tgt_calsoft 00:07:57.146 ************************************ 00:07:57.146 19:46:25 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@31 -- # run_test iscsi_tgt_filesystem /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem/filesystem.sh 00:07:57.146 19:46:25 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:57.146 19:46:25 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:07:57.146 19:46:25 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:07:57.146 ************************************ 00:07:57.146 START TEST iscsi_tgt_filesystem 00:07:57.146 ************************************ 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem/filesystem.sh 00:07:57.146 * Looking for test storage... 00:07:57.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/setup/common.sh 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- setup/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 
00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- 
common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 
00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:57.146 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- 
common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- 
common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=y 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:57.147 19:46:25 
iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:57.147 #define SPDK_CONFIG_H 00:07:57.147 #define SPDK_CONFIG_APPS 1 00:07:57.147 #define SPDK_CONFIG_ARCH native 00:07:57.147 #undef SPDK_CONFIG_ASAN 00:07:57.147 #undef SPDK_CONFIG_AVAHI 00:07:57.147 #undef SPDK_CONFIG_CET 00:07:57.147 #define SPDK_CONFIG_COVERAGE 1 00:07:57.147 #define SPDK_CONFIG_CROSS_PREFIX 00:07:57.147 #undef SPDK_CONFIG_CRYPTO 00:07:57.147 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:57.147 #undef SPDK_CONFIG_CUSTOMOCF 00:07:57.147 #undef SPDK_CONFIG_DAOS 00:07:57.147 #define SPDK_CONFIG_DAOS_DIR 00:07:57.147 #define SPDK_CONFIG_DEBUG 1 00:07:57.147 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:57.147 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:07:57.147 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:57.147 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:57.147 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:57.147 #undef SPDK_CONFIG_DPDK_UADK 00:07:57.147 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:57.147 #define SPDK_CONFIG_EXAMPLES 1 00:07:57.147 #undef SPDK_CONFIG_FC 00:07:57.147 #define SPDK_CONFIG_FC_PATH 00:07:57.147 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:57.147 #define 
SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:57.147 #undef SPDK_CONFIG_FUSE 00:07:57.147 #undef SPDK_CONFIG_FUZZER 00:07:57.147 #define SPDK_CONFIG_FUZZER_LIB 00:07:57.147 #undef SPDK_CONFIG_GOLANG 00:07:57.147 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:57.147 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:57.147 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:57.147 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:57.147 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:57.147 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:57.147 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:57.147 #define SPDK_CONFIG_IDXD 1 00:07:57.147 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:57.147 #undef SPDK_CONFIG_IPSEC_MB 00:07:57.147 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:57.147 #define SPDK_CONFIG_ISAL 1 00:07:57.147 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:57.147 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:57.147 #define SPDK_CONFIG_LIBDIR 00:07:57.147 #undef SPDK_CONFIG_LTO 00:07:57.147 #define SPDK_CONFIG_MAX_LCORES 128 00:07:57.147 #define SPDK_CONFIG_NVME_CUSE 1 00:07:57.147 #undef SPDK_CONFIG_OCF 00:07:57.147 #define SPDK_CONFIG_OCF_PATH 00:07:57.147 #define SPDK_CONFIG_OPENSSL_PATH 00:07:57.147 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:57.147 #define SPDK_CONFIG_PGO_DIR 00:07:57.147 #undef SPDK_CONFIG_PGO_USE 00:07:57.147 #define SPDK_CONFIG_PREFIX /usr/local 00:07:57.147 #undef SPDK_CONFIG_RAID5F 00:07:57.147 #undef SPDK_CONFIG_RBD 00:07:57.147 #define SPDK_CONFIG_RDMA 1 00:07:57.147 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:57.147 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:57.147 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:57.147 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:57.147 #define SPDK_CONFIG_SHARED 1 00:07:57.147 #undef SPDK_CONFIG_SMA 00:07:57.147 #define SPDK_CONFIG_TESTS 1 00:07:57.147 #undef SPDK_CONFIG_TSAN 00:07:57.147 #define SPDK_CONFIG_UBLK 1 00:07:57.147 #define SPDK_CONFIG_UBSAN 1 00:07:57.147 #undef SPDK_CONFIG_UNIT_TESTS 00:07:57.147 #define SPDK_CONFIG_URING 1 00:07:57.147 #define 
SPDK_CONFIG_URING_PATH 00:07:57.147 #define SPDK_CONFIG_URING_ZNS 1 00:07:57.147 #undef SPDK_CONFIG_USDT 00:07:57.147 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:57.147 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:57.147 #undef SPDK_CONFIG_VFIO_USER 00:07:57.147 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:57.147 #define SPDK_CONFIG_VHOST 1 00:07:57.147 #define SPDK_CONFIG_VIRTIO 1 00:07:57.147 #undef SPDK_CONFIG_VTUNE 00:07:57.147 #define SPDK_CONFIG_VTUNE_DIR 00:07:57.147 #define SPDK_CONFIG_WERROR 1 00:07:57.147 #define SPDK_CONFIG_WPDK_DIR 00:07:57.147 #undef SPDK_CONFIG_XNVME 00:07:57.147 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.147 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@5 -- # export PATH 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- 
pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@68 -- # uname -s 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:57.148 19:46:25 
iscsi_tgt.iscsi_tgt_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@70 -- # : 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:57.148 19:46:25 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@76 -- # : 1 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@86 -- # : 0 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@92 -- # : 0 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:57.148 19:46:25 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@94 -- # : 0 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:57.148 
19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@124 -- # : 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 
00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@138 -- # : 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:57.148 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@144 -- # : 1 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:57.149 
19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@154 -- # : 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@166 -- # : 0 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@169 -- # : 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@170 -- # export 
SPDK_TEST_FUZZER_TARGET 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@173 -- # : 0 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@180 -- # export 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@187 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@202 -- # cat 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:57.149 19:46:25 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@258 -- # export 
AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@265 -- # export valgrind= 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j10 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@300 -- # NO_HUGE=() 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@320 -- # [[ -z 65424 ]] 00:07:57.149 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@320 -- # kill -0 65424 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.aPNNyF 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@357 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 
/tmp/spdk.aPNNyF/tests/filesystem /tmp/spdk.aPNNyF 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@329 -- # df -T 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=devtmpfs 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=4194304 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=4194304 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=6264516608 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=6267891712 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=3375104 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=2496167936 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=2507157504 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=10989568 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/vda5 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=btrfs 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=13801148416 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=20314062848 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=5227130880 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/vda5 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=btrfs 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=13801148416 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=20314062848 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=5227130880 00:07:57.150 19:46:25 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=6267748352 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=6267891712 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=143360 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/vda2 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext4 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=843546624 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=1012768768 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=100016128 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/vda3 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=vfat 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=92499968 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=104607744 
00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=12107776 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=1253572608 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=1253576704 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/iscsi-uring-vg-autotest_2/fedora38-libvirt/output 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=fuse.sshfs 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=89879826432 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=105088212992 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=9822953472 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:07:57.150 * Looking for test storage... 
00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@374 -- # df /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@374 -- # mount=/home 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@376 -- # target_space=13801148416 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@382 -- # [[ btrfs == tmpfs ]] 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@382 -- # [[ btrfs == ramfs ]] 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@382 -- # [[ /home == / ]] 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:07:57.150 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:07:57.150 19:46:25 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@391 -- # return 0 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:57.150 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@25 -- # 
INITIATOR_TAG=2 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@11 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@5 -- # export PATH 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@13 -- # iscsitestinit 00:07:57.151 19:46:25 
iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@29 -- # timing_enter start_iscsi_tgt 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@32 -- # pid=65467 00:07:57.151 Process pid: 65467 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@33 -- # echo 'Process pid: 65467' 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@35 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@31 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --wait-for-rpc 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@37 -- # waitforlisten 65467 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@831 -- # '[' -z 65467 ']' 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:57.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:57.151 19:46:25 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:57.411 [2024-07-24 19:46:25.839067] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:07:57.411 [2024-07-24 19:46:25.839182] [ DPDK EAL parameters: iscsi --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65467 ] 00:07:57.411 [2024-07-24 19:46:25.983707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:57.667 [2024-07-24 19:46:26.152857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.667 [2024-07-24 19:46:26.152995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:57.667 [2024-07-24 19:46:26.153208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:57.667 [2024-07-24 19:46:26.153210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.233 19:46:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:58.233 19:46:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@864 -- # return 0 00:07:58.233 19:46:26 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@38 -- # rpc_cmd iscsi_set_options -o 30 -a 16 00:07:58.233 19:46:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.233 19:46:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:58.233 19:46:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.233 19:46:26 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@39 -- # rpc_cmd framework_start_init 00:07:58.233 19:46:26 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.233 19:46:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:58.491 [2024-07-24 19:46:26.980463] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:58.748 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.748 iscsi_tgt is listening. Running tests... 00:07:58.748 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@40 -- # echo 'iscsi_tgt is listening. Running tests...' 00:07:58.748 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@42 -- # timing_exit start_iscsi_tgt 00:07:58.748 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:58.748 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:58.748 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@44 -- # get_first_nvme_bdf 00:07:58.748 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1524 -- # bdfs=() 00:07:58.748 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1524 -- # local bdfs 00:07:58.748 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:07:58.748 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:07:58.748 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1513 -- # bdfs=() 00:07:58.748 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1513 -- # local bdfs 00:07:58.748 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:58.748 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:58.748 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:07:58.748 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:07:58.748 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:58.748 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:07:58.748 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@44 -- # bdf=0000:00:10.0 00:07:58.748 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@45 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:07:58.748 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.748 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:58.748 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.748 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@46 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:07:58.748 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.748 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:58.748 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.748 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@47 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t pcie -a 0000:00:10.0 00:07:58.748 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.749 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:59.006 Nvme0n1 00:07:59.006 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.006 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@49 -- # 
rpc_cmd bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:07:59.006 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.006 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:59.006 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.006 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@49 -- # ls_guid=9748f764-219c-41cf-bbfb-4e84280519eb 00:07:59.006 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@50 -- # get_lvs_free_mb 9748f764-219c-41cf-bbfb-4e84280519eb 00:07:59.006 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1364 -- # local lvs_uuid=9748f764-219c-41cf-bbfb-4e84280519eb 00:07:59.006 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1365 -- # local lvs_info 00:07:59.006 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1366 -- # local fc 00:07:59.006 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1367 -- # local cs 00:07:59.006 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_lvol_get_lvstores 00:07:59.006 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.006 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:59.006 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.006 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:07:59.006 { 00:07:59.006 "uuid": "9748f764-219c-41cf-bbfb-4e84280519eb", 00:07:59.006 "name": "lvs_0", 00:07:59.006 "base_bdev": "Nvme0n1", 00:07:59.006 "total_data_clusters": 1278, 00:07:59.006 "free_clusters": 1278, 00:07:59.006 "block_size": 4096, 00:07:59.006 "cluster_size": 4194304 00:07:59.006 } 00:07:59.006 ]' 00:07:59.006 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="9748f764-219c-41cf-bbfb-4e84280519eb") .free_clusters' 00:07:59.006 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1369 -- # fc=1278 00:07:59.006 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="9748f764-219c-41cf-bbfb-4e84280519eb") .cluster_size' 00:07:59.006 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1370 -- # cs=4194304 00:07:59.006 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1373 -- # free_mb=5112 00:07:59.006 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1374 -- # echo 5112 00:07:59.006 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@50 -- # free_mb=5112 00:07:59.006 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@52 -- # '[' 5112 -gt 2048 ']' 00:07:59.006 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@53 -- # rpc_cmd bdev_lvol_create -u 9748f764-219c-41cf-bbfb-4e84280519eb lbd_0 2048 00:07:59.006 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.006 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:59.006 1193b5da-57ff-4baa-9044-ec91fde0a8e9 00:07:59.006 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.006 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@61 -- # lvol_name=lvs_0/lbd_0 00:07:59.006 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@62 -- # rpc_cmd iscsi_create_target_node Target1 Target1_alias lvs_0/lbd_0:0 1:2 256 -d 00:07:59.006 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.006 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:59.007 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.007 19:46:27 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@63 -- # sleep 1 00:07:59.940 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@65 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:08:00.222 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:08:00.222 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@66 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:08:00.222 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:08:00.222 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:08:00.222 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@67 -- # waitforiscsidevices 1 00:08:00.222 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@116 -- # local num=1 00:08:00.222 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:08:00.222 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:08:00.222 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:08:00.222 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:08:00.222 [2024-07-24 19:46:28.718672] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:00.222 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@119 -- # n=1 00:08:00.222 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:08:00.222 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@123 -- # return 0 00:08:00.222 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@69 -- # get_bdev_size lvs_0/lbd_0 00:08:00.222 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1378 -- # local bdev_name=lvs_0/lbd_0 00:08:00.222 19:46:28 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:00.222 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1380 -- # local bs 00:08:00.222 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1381 -- # local nb 00:08:00.222 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b lvs_0/lbd_0 00:08:00.222 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.222 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:00.222 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.222 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:00.222 { 00:08:00.222 "name": "1193b5da-57ff-4baa-9044-ec91fde0a8e9", 00:08:00.222 "aliases": [ 00:08:00.222 "lvs_0/lbd_0" 00:08:00.222 ], 00:08:00.222 "product_name": "Logical Volume", 00:08:00.222 "block_size": 4096, 00:08:00.222 "num_blocks": 524288, 00:08:00.222 "uuid": "1193b5da-57ff-4baa-9044-ec91fde0a8e9", 00:08:00.222 "assigned_rate_limits": { 00:08:00.222 "rw_ios_per_sec": 0, 00:08:00.222 "rw_mbytes_per_sec": 0, 00:08:00.222 "r_mbytes_per_sec": 0, 00:08:00.222 "w_mbytes_per_sec": 0 00:08:00.222 }, 00:08:00.222 "claimed": false, 00:08:00.222 "zoned": false, 00:08:00.222 "supported_io_types": { 00:08:00.222 "read": true, 00:08:00.222 "write": true, 00:08:00.222 "unmap": true, 00:08:00.222 "flush": false, 00:08:00.222 "reset": true, 00:08:00.222 "nvme_admin": false, 00:08:00.222 "nvme_io": false, 00:08:00.222 "nvme_io_md": false, 00:08:00.222 "write_zeroes": true, 00:08:00.222 "zcopy": false, 00:08:00.222 "get_zone_info": false, 00:08:00.222 "zone_management": false, 00:08:00.222 "zone_append": false, 00:08:00.222 "compare": false, 00:08:00.222 "compare_and_write": false, 00:08:00.222 "abort": false, 00:08:00.222 "seek_hole": true, 
00:08:00.222 "seek_data": true, 00:08:00.222 "copy": false, 00:08:00.222 "nvme_iov_md": false 00:08:00.222 }, 00:08:00.222 "driver_specific": { 00:08:00.222 "lvol": { 00:08:00.222 "lvol_store_uuid": "9748f764-219c-41cf-bbfb-4e84280519eb", 00:08:00.222 "base_bdev": "Nvme0n1", 00:08:00.222 "thin_provision": false, 00:08:00.222 "num_allocated_clusters": 512, 00:08:00.222 "snapshot": false, 00:08:00.222 "clone": false, 00:08:00.222 "esnap_clone": false 00:08:00.222 } 00:08:00.222 } 00:08:00.222 } 00:08:00.222 ]' 00:08:00.222 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:00.222 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1383 -- # bs=4096 00:08:00.222 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:00.222 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1384 -- # nb=524288 00:08:00.222 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1387 -- # bdev_size=2048 00:08:00.222 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1388 -- # echo 2048 00:08:00.222 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@69 -- # lvol_size=2147483648 00:08:00.223 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@70 -- # trap 'iscsicleanup; remove_backends; umount /mnt/device; rm -rf /mnt/device; killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:08:00.223 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@72 -- # mkdir -p /mnt/device 00:08:00.223 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@74 -- # iscsiadm -m session -P 3 00:08:00.223 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@74 -- # grep 'Attached scsi disk' 00:08:00.223 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@74 -- # awk '{print $4}' 00:08:00.223 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- 
filesystem/filesystem.sh@74 -- # dev=sda 00:08:00.223 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@76 -- # waitforfile /dev/sda 00:08:00.223 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1265 -- # local i=0 00:08:00.223 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda ']' 00:08:00.223 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1272 -- # '[' '!' -e /dev/sda ']' 00:08:00.223 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1276 -- # return 0 00:08:00.520 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@78 -- # sec_size_to_bytes sda 00:08:00.521 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- setup/common.sh@76 -- # local dev=sda 00:08:00.521 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- setup/common.sh@78 -- # [[ -e /sys/block/sda ]] 00:08:00.521 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- setup/common.sh@80 -- # echo 2147483648 00:08:00.521 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@78 -- # dev_size=2147483648 00:08:00.521 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@80 -- # (( lvol_size == dev_size )) 00:08:00.521 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@81 -- # parted -s /dev/sda mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:00.521 19:46:28 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@82 -- # sleep 1 00:08:00.521 [2024-07-24 19:46:28.904838] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:01.457 19:46:29 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@144 -- # run_test iscsi_tgt_filesystem_ext4 filesystem_test ext4 00:08:01.457 19:46:29 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:01.457 19:46:29 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.457 19:46:29 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@10 -- # set +x 00:08:01.457 ************************************ 00:08:01.457 START TEST iscsi_tgt_filesystem_ext4 00:08:01.457 ************************************ 00:08:01.457 19:46:29 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1125 -- # filesystem_test ext4 00:08:01.457 19:46:29 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@89 -- # fstype=ext4 00:08:01.457 19:46:29 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@91 -- # make_filesystem ext4 /dev/sda1 00:08:01.457 19:46:29 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:08:01.457 19:46:29 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/sda1 00:08:01.457 19:46:29 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:08:01.457 19:46:29 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:08:01.457 19:46:29 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:08:01.457 19:46:29 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:08:01.457 19:46:29 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/sda1 00:08:01.457 mke2fs 1.46.5 (30-Dec-2021) 00:08:01.457 Discarding device blocks: 0/522240 done 00:08:01.457 Creating filesystem with 522240 4k blocks and 130560 inodes 00:08:01.457 Filesystem UUID: 59f6e6e5-986c-423c-a127-329bce278ca3 00:08:01.457 Superblock backups stored on blocks: 00:08:01.457 32768, 98304, 163840, 229376, 294912 00:08:01.457 00:08:01.457 Allocating group tables: 0/16 done 00:08:01.457 Writing inode tables: 0/16 done 00:08:01.716 
Creating journal (8192 blocks): done 00:08:01.716 Writing superblocks and filesystem accounting information: 0/16 done 00:08:01.716 00:08:01.716 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:08:01.716 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@92 -- # mount /dev/sda1 /mnt/device 00:08:01.716 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@93 -- # '[' 0 -eq 1 ']' 00:08:01.716 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@119 -- # touch /mnt/device/aaa 00:08:01.717 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@120 -- # umount /mnt/device 00:08:01.717 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@122 -- # iscsiadm -m node --logout 00:08:01.717 Logging out of session [sid: 1, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:08:01.717 Logout of [sid: 1, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:08:01.717 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@123 -- # waitforiscsidevices 0 00:08:01.717 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@116 -- # local num=0 00:08:01.717 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:08:01.717 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:08:01.717 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:08:01.717 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:08:01.717 iscsiadm: No active sessions. 
00:08:01.717 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # true 00:08:01.717 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # n=0 00:08:01.717 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:08:01.717 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@123 -- # return 0 00:08:01.717 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@124 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:08:01.717 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:08:01.717 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:08:01.717 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@125 -- # waitforiscsidevices 1 00:08:01.717 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@116 -- # local num=1 00:08:01.717 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:08:01.717 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:08:01.717 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:08:01.717 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:08:01.717 [2024-07-24 19:46:30.340120] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:01.717 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # n=1 00:08:01.717 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@120 -- # '[' 1 
-ne 1 ']' 00:08:01.717 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@123 -- # return 0 00:08:01.717 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@127 -- # iscsiadm -m session -P 3 00:08:01.717 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@127 -- # awk '{print $4}' 00:08:01.717 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@127 -- # grep 'Attached scsi disk' 00:08:01.717 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@127 -- # dev=sda 00:08:01.717 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@129 -- # waitforfile /dev/sda1 00:08:01.717 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1265 -- # local i=0 00:08:01.717 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda1 ']' 00:08:01.717 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1272 -- # '[' '!' -e /dev/sda1 ']' 00:08:01.717 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1276 -- # return 0 00:08:01.717 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@130 -- # mount -o rw /dev/sda1 /mnt/device 00:08:01.976 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@132 -- # '[' -f /mnt/device/aaa ']' 00:08:01.976 File existed. 00:08:01.976 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@133 -- # echo 'File existed.' 
00:08:01.976 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@139 -- # rm -rf /mnt/device/aaa 00:08:01.976 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@140 -- # umount /mnt/device 00:08:01.976 00:08:01.976 real 0m0.502s 00:08:01.976 user 0m0.043s 00:08:01.976 sys 0m0.095s 00:08:01.976 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:01.976 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:01.976 ************************************ 00:08:01.976 END TEST iscsi_tgt_filesystem_ext4 00:08:01.976 ************************************ 00:08:01.976 19:46:30 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@145 -- # run_test iscsi_tgt_filesystem_btrfs filesystem_test btrfs 00:08:01.976 19:46:30 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:01.976 19:46:30 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.976 19:46:30 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:01.976 ************************************ 00:08:01.976 START TEST iscsi_tgt_filesystem_btrfs 00:08:01.976 ************************************ 00:08:01.976 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1125 -- # filesystem_test btrfs 00:08:01.976 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@89 -- # fstype=btrfs 00:08:01.976 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@91 -- # make_filesystem btrfs /dev/sda1 00:08:01.976 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:08:01.976 19:46:30 
iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/sda1 00:08:01.976 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:08:01.976 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:08:01.976 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:08:01.976 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:08:01.976 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/sda1 00:08:01.976 btrfs-progs v6.6.2 00:08:01.976 See https://btrfs.readthedocs.io for more information. 00:08:01.976 00:08:01.976 Performing full device TRIM /dev/sda1 (1.99GiB) ... 00:08:01.976 NOTE: several default settings have changed in version 5.15, please make sure 00:08:01.976 this does not affect your deployments: 00:08:01.976 - DUP for metadata (-m dup) 00:08:01.976 - enabled no-holes (-O no-holes) 00:08:01.976 - enabled free-space-tree (-R free-space-tree) 00:08:01.976 00:08:01.976 Label: (null) 00:08:01.976 UUID: 9dd465bb-8c75-43e2-a87f-1a4cef7ccd60 00:08:01.976 Node size: 16384 00:08:01.976 Sector size: 4096 00:08:01.976 Filesystem size: 1.99GiB 00:08:01.976 Block group profiles: 00:08:01.976 Data: single 8.00MiB 00:08:01.976 Metadata: DUP 102.00MiB 00:08:01.976 System: DUP 8.00MiB 00:08:01.976 SSD detected: yes 00:08:01.976 Zoned device: no 00:08:01.976 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:01.976 Runtime features: free-space-tree 00:08:01.976 Checksum: crc32c 00:08:01.976 Number of devices: 1 00:08:01.976 Devices: 00:08:01.976 ID SIZE PATH 00:08:01.976 1 1.99GiB /dev/sda1 00:08:01.976 00:08:01.976 19:46:30 
iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:08:01.976 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@92 -- # mount /dev/sda1 /mnt/device 00:08:02.234 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@93 -- # '[' 0 -eq 1 ']' 00:08:02.234 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@119 -- # touch /mnt/device/aaa 00:08:02.234 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@120 -- # umount /mnt/device 00:08:02.234 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@122 -- # iscsiadm -m node --logout 00:08:02.234 Logging out of session [sid: 2, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:08:02.234 Logout of [sid: 2, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:08:02.234 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@123 -- # waitforiscsidevices 0 00:08:02.234 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@116 -- # local num=0 00:08:02.234 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:08:02.234 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:08:02.234 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:08:02.234 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:08:02.234 iscsiadm: No active sessions. 
00:08:02.234 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # true 00:08:02.235 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # n=0 00:08:02.235 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:08:02.235 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@123 -- # return 0 00:08:02.235 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@124 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:08:02.235 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:08:02.235 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:08:02.235 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@125 -- # waitforiscsidevices 1 00:08:02.235 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@116 -- # local num=1 00:08:02.235 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:08:02.235 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:08:02.235 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:08:02.235 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:08:02.235 [2024-07-24 19:46:30.790804] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:02.235 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # n=1 00:08:02.235 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- 
iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:08:02.235 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@123 -- # return 0 00:08:02.235 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@127 -- # iscsiadm -m session -P 3 00:08:02.235 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@127 -- # grep 'Attached scsi disk' 00:08:02.235 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@127 -- # awk '{print $4}' 00:08:02.235 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@127 -- # dev=sda 00:08:02.235 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@129 -- # waitforfile /dev/sda1 00:08:02.235 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1265 -- # local i=0 00:08:02.235 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda1 ']' 00:08:02.235 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1272 -- # '[' '!' -e /dev/sda1 ']' 00:08:02.235 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1276 -- # return 0 00:08:02.235 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@130 -- # mount -o rw /dev/sda1 /mnt/device 00:08:02.235 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@132 -- # '[' -f /mnt/device/aaa ']' 00:08:02.235 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@133 -- # echo 'File existed.' 00:08:02.235 File existed. 
00:08:02.235 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@139 -- # rm -rf /mnt/device/aaa 00:08:02.235 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@140 -- # umount /mnt/device 00:08:02.235 00:08:02.235 real 0m0.401s 00:08:02.235 user 0m0.049s 00:08:02.235 sys 0m0.128s 00:08:02.235 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:02.235 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:02.235 ************************************ 00:08:02.235 END TEST iscsi_tgt_filesystem_btrfs 00:08:02.235 ************************************ 00:08:02.493 19:46:30 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@146 -- # run_test iscsi_tgt_filesystem_xfs filesystem_test xfs 00:08:02.493 19:46:30 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:02.493 19:46:30 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:02.493 19:46:30 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:02.493 ************************************ 00:08:02.493 START TEST iscsi_tgt_filesystem_xfs 00:08:02.493 ************************************ 00:08:02.493 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1125 -- # filesystem_test xfs 00:08:02.493 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@89 -- # fstype=xfs 00:08:02.493 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@91 -- # make_filesystem xfs /dev/sda1 00:08:02.493 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:08:02.493 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs 
-- common/autotest_common.sh@927 -- # local dev_name=/dev/sda1 00:08:02.493 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:08:02.493 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:08:02.493 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:08:02.493 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:08:02.493 19:46:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/sda1 00:08:02.493 meta-data=/dev/sda1 isize=512 agcount=4, agsize=130560 blks 00:08:02.493 = sectsz=4096 attr=2, projid32bit=1 00:08:02.493 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:02.493 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:02.493 data = bsize=4096 blocks=522240, imaxpct=25 00:08:02.493 = sunit=0 swidth=0 blks 00:08:02.493 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:02.493 log =internal log bsize=4096 blocks=16384, version=2 00:08:02.493 = sectsz=4096 sunit=1 blks, lazy-count=1 00:08:02.493 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:03.060 Discarding blocks...Done. 
00:08:03.060 19:46:31 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:08:03.060 19:46:31 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@92 -- # mount /dev/sda1 /mnt/device 00:08:03.634 19:46:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@93 -- # '[' 0 -eq 1 ']' 00:08:03.634 19:46:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@119 -- # touch /mnt/device/aaa 00:08:03.634 19:46:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@120 -- # umount /mnt/device 00:08:03.634 19:46:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@122 -- # iscsiadm -m node --logout 00:08:03.634 Logging out of session [sid: 3, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:08:03.634 Logout of [sid: 3, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:08:03.634 19:46:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@123 -- # waitforiscsidevices 0 00:08:03.634 19:46:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@116 -- # local num=0 00:08:03.634 19:46:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:08:03.634 19:46:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:08:03.634 19:46:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:08:03.634 19:46:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:08:03.634 iscsiadm: No active sessions. 
00:08:03.634 19:46:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # true 00:08:03.634 19:46:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # n=0 00:08:03.634 19:46:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:08:03.634 19:46:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@123 -- # return 0 00:08:03.634 19:46:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@124 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:08:03.634 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:08:03.634 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:08:03.634 19:46:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@125 -- # waitforiscsidevices 1 00:08:03.634 19:46:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@116 -- # local num=1 00:08:03.634 19:46:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:08:03.634 19:46:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:08:03.634 19:46:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:08:03.634 19:46:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:08:03.634 [2024-07-24 19:46:32.229341] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:03.634 19:46:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # n=1 00:08:03.634 19:46:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 
00:08:03.634 19:46:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@123 -- # return 0 00:08:03.634 19:46:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@127 -- # grep 'Attached scsi disk' 00:08:03.634 19:46:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@127 -- # awk '{print $4}' 00:08:03.634 19:46:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@127 -- # iscsiadm -m session -P 3 00:08:03.634 19:46:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@127 -- # dev=sda 00:08:03.634 19:46:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@129 -- # waitforfile /dev/sda1 00:08:03.634 19:46:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1265 -- # local i=0 00:08:03.634 19:46:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda1 ']' 00:08:03.634 19:46:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1272 -- # '[' '!' -e /dev/sda1 ']' 00:08:03.634 19:46:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1276 -- # return 0 00:08:03.634 19:46:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@130 -- # mount -o rw /dev/sda1 /mnt/device 00:08:03.893 19:46:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@132 -- # '[' -f /mnt/device/aaa ']' 00:08:03.893 File existed. 00:08:03.893 19:46:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@133 -- # echo 'File existed.' 
00:08:03.893 19:46:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@139 -- # rm -rf /mnt/device/aaa 00:08:03.893 19:46:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@140 -- # umount /mnt/device 00:08:03.893 00:08:03.893 real 0m1.445s 00:08:03.893 user 0m0.052s 00:08:03.893 sys 0m0.097s 00:08:03.893 19:46:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:03.893 19:46:32 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:03.893 ************************************ 00:08:03.893 END TEST iscsi_tgt_filesystem_xfs 00:08:03.893 ************************************ 00:08:03.893 19:46:32 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@148 -- # rm -rf /mnt/device 00:08:03.893 19:46:32 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:08:03.893 19:46:32 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@152 -- # iscsicleanup 00:08:03.893 Cleaning up iSCSI connection 00:08:03.893 19:46:32 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection' 00:08:03.893 19:46:32 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout 00:08:03.893 Logging out of session [sid: 4, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:08:03.893 Logout of [sid: 4, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:08:03.893 19:46:32 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete 00:08:03.893 19:46:32 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@985 -- # rm -rf 00:08:03.893 19:46:32 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@153 -- # remove_backends 00:08:03.893 19:46:32 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@17 -- # echo 'INFO: Removing lvol bdev' 00:08:03.893 INFO: Removing lvol bdev 00:08:03.893 19:46:32 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@18 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:08:03.893 19:46:32 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.893 19:46:32 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:03.893 [2024-07-24 19:46:32.501189] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (1193b5da-57ff-4baa-9044-ec91fde0a8e9) received event(SPDK_BDEV_EVENT_REMOVE) 00:08:03.893 19:46:32 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.893 INFO: Removing lvol stores 00:08:03.893 19:46:32 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@20 -- # echo 'INFO: Removing lvol stores' 00:08:03.893 19:46:32 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@21 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:08:03.893 19:46:32 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.893 19:46:32 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:03.893 19:46:32 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.893 INFO: Removing NVMe 00:08:03.893 19:46:32 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@23 -- # echo 'INFO: Removing NVMe' 00:08:03.893 19:46:32 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@24 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:08:03.893 19:46:32 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.893 19:46:32 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:04.152 19:46:32 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.152 19:46:32 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@26 -- # return 0 00:08:04.152 19:46:32 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@154 -- # killprocess 65467 00:08:04.152 19:46:32 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@950 -- # '[' -z 65467 ']' 00:08:04.152 19:46:32 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@954 -- # kill -0 65467 00:08:04.152 19:46:32 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@955 -- # uname 00:08:04.152 19:46:32 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:04.152 19:46:32 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65467 00:08:04.152 19:46:32 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:04.152 19:46:32 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:04.152 19:46:32 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65467' 00:08:04.152 killing process with pid 65467 00:08:04.152 19:46:32 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@969 -- # kill 65467 00:08:04.152 19:46:32 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@974 -- # wait 65467 00:08:04.719 19:46:33 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@155 -- # iscsitestfini 00:08:04.719 19:46:33 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:08:04.719 00:08:04.719 real 0m7.700s 00:08:04.719 user 0m27.778s 00:08:04.719 sys 0m1.683s 00:08:04.719 19:46:33 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:08:04.719 19:46:33 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:04.719 ************************************ 00:08:04.719 END TEST iscsi_tgt_filesystem 00:08:04.719 ************************************ 00:08:04.719 19:46:33 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@32 -- # run_test chap_during_discovery /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_discovery.sh 00:08:04.719 19:46:33 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:04.719 19:46:33 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:04.719 19:46:33 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:08:04.719 ************************************ 00:08:04.719 START TEST chap_during_discovery 00:08:04.719 ************************************ 00:08:04.719 19:46:33 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_discovery.sh 00:08:04.719 * Looking for test storage... 
00:08:04.719 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap 00:08:04.719 19:46:33 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:08:04.719 19:46:33 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:08:04.719 19:46:33 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:08:04.719 19:46:33 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:08:04.719 19:46:33 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:08:04.719 19:46:33 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:08:04.719 19:46:33 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:08:04.719 19:46:33 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:08:04.719 19:46:33 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:08:04.719 19:46:33 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:08:04.719 19:46:33 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:08:04.719 19:46:33 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:08:04.719 19:46:33 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:08:04.719 19:46:33 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:08:04.719 19:46:33 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:08:04.719 19:46:33 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:08:04.719 19:46:33 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 
00:08:04.719 19:46:33 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:08:04.719 19:46:33 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:08:04.719 19:46:33 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:08:04.719 19:46:33 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_common.sh 00:08:04.719 19:46:33 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@7 -- # TARGET_NAME=iqn.2016-06.io.spdk:disk1 00:08:04.719 19:46:33 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@8 -- # TARGET_ALIAS_NAME=disk1_alias 00:08:04.719 19:46:33 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@9 -- # MALLOC_BDEV_SIZE=64 00:08:04.720 19:46:33 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@10 -- # MALLOC_BLOCK_SIZE=512 00:08:04.720 19:46:33 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@13 -- # USER=chapo 00:08:04.720 19:46:33 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@14 -- # MUSER=mchapo 00:08:04.720 19:46:33 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@15 -- # PASS=123456789123 00:08:04.720 19:46:33 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@16 -- # MPASS=321978654321 00:08:04.720 19:46:33 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@19 -- # iscsitestinit 00:08:04.720 19:46:33 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:08:04.720 19:46:33 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@21 -- # set_up_iscsi_target 00:08:04.720 19:46:33 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@140 -- # timing_enter start_iscsi_tgt 00:08:04.720 19:46:33 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:04.720 19:46:33 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 
00:08:04.720 19:46:33 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@142 -- # pid=65906 00:08:04.720 iSCSI target launched. pid: 65906 00:08:04.720 19:46:33 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@143 -- # echo 'iSCSI target launched. pid: 65906' 00:08:04.720 19:46:33 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@144 -- # trap 'killprocess $pid;exit 1' SIGINT SIGTERM EXIT 00:08:04.720 19:46:33 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@145 -- # waitforlisten 65906 00:08:04.720 19:46:33 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@831 -- # '[' -z 65906 ']' 00:08:04.720 19:46:33 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.720 19:46:33 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:04.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.720 19:46:33 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.720 19:46:33 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:04.720 19:46:33 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@141 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc 00:08:04.720 19:46:33 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:04.977 [2024-07-24 19:46:33.426340] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
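`waitforlisten` above polls (with `max_retries=100`) until the freshly launched target is up and listening on `/var/tmp/spdk.sock` before any RPC is issued. A minimal bounded-retry sketch of that pattern, using an ordinary file as a stand-in for the UNIX domain socket so it runs unprivileged:

```shell
#!/usr/bin/env bash
# Poll for an artifact (a file standing in for spdk.sock) with a bounded
# retry loop, mirroring the waitforlisten pattern traced in the log.
wait_for_path() {
    local path=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        [ -e "$path" ] && return 0
        sleep 0.1
    done
    echo "timed out waiting for $path" >&2
    return 1
}

sock=$(mktemp -u)                # a path that does not exist yet
( sleep 0.3; touch "$sock" ) &   # "target" creates it shortly
wait_for_path "$sock" 100 && echo "listening"
rm -f "$sock"
```

The real helper additionally probes the socket with an RPC instead of just checking existence; the retry/timeout shape is the same.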
00:08:04.977 [2024-07-24 19:46:33.426417] [ DPDK EAL parameters: iscsi --no-shconf -c 0x2 -m 512 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65906 ] 00:08:05.235 [2024-07-24 19:46:33.676020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.235 [2024-07-24 19:46:33.769701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.802 19:46:34 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:05.802 19:46:34 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@864 -- # return 0 00:08:05.802 19:46:34 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@146 -- # rpc_cmd iscsi_set_options -o 30 -a 4 00:08:05.802 19:46:34 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.802 19:46:34 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:05.802 19:46:34 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.802 19:46:34 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@147 -- # rpc_cmd framework_start_init 00:08:05.802 19:46:34 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.802 19:46:34 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:05.802 [2024-07-24 19:46:34.449920] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:06.061 19:46:34 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.061 iscsi_tgt is listening. Running tests... 00:08:06.061 19:46:34 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@148 -- # echo 'iscsi_tgt is listening. Running tests...'
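Once the target is listening, the suite configures it over JSON-RPC; the `rpc_cmd` calls in this log correspond to SPDK `rpc.py` subcommands against `/var/tmp/spdk.sock`. The sketch below replaces `rpc_cmd` with an echo stub so the sequence can be shown (and checked) without a running target; the subcommand names and arguments are taken directly from the trace:

```shell
#!/usr/bin/env bash
# Echo stub standing in for the suite's rpc_cmd wrapper (which forwards
# to scripts/rpc.py against the target's UNIX socket in the real run).
rpc_cmd() { echo "rpc: $*"; }

# RPC sequence from the log: session options, framework init, portal and
# initiator groups, a 64 MiB / 512 B-block malloc bdev, and the target node.
rpc_cmd iscsi_set_options -o 30 -a 4
rpc_cmd framework_start_init
rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260
rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32
rpc_cmd bdev_malloc_create 64 512
rpc_cmd iscsi_create_target_node iqn.2016-06.io.spdk:disk1 disk1_alias Malloc0:0 1:2 256 -d
```

Because the target was launched with `--wait-for-rpc`, nothing initializes until `framework_start_init` arrives, which is why `iscsi_set_options` can still be applied first.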
00:08:06.061 19:46:34 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@149 -- # timing_exit start_iscsi_tgt 00:08:06.061 19:46:34 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:06.061 19:46:34 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:06.061 19:46:34 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@151 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:08:06.061 19:46:34 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.061 19:46:34 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:06.061 19:46:34 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.061 19:46:34 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@152 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:08:06.061 19:46:34 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.061 19:46:34 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:06.061 19:46:34 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.061 19:46:34 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@153 -- # rpc_cmd bdev_malloc_create 64 512 00:08:06.061 19:46:34 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.061 19:46:34 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:06.061 Malloc0 00:08:06.061 19:46:34 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.061 19:46:34 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@154 -- # rpc_cmd iscsi_create_target_node iqn.2016-06.io.spdk:disk1 disk1_alias Malloc0:0 1:2 256 -d 00:08:06.061 19:46:34 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.061 19:46:34 
iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:06.062 19:46:34 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.062 19:46:34 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@155 -- # sleep 1 00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@156 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:08:06.996 configuring target for bidirectional authentication 00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@24 -- # echo 'configuring target for bidirectional authentication' 00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@25 -- # config_chap_credentials_for_target -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b 00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@84 -- # parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b 00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@13 -- # OPTIND=0 00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@16 -- # BI_DIRECT=0 00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@20 -- # CHAP_MUSER= 00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@21 -- # AUTH_GROUP_ID=1 00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt
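`parse_cmd_line` in chap_common.sh walks its flags with `getopts :t:u:s:r:m:dlb`, as the trace above shows step by step. A self-contained version of that parser, using the same option string and variable names as the trace (this is a standalone sketch, not the original function):

```shell
#!/usr/bin/env bash
# getopts-based parser matching the option string from the trace:
#   -t group-id  -u user  -s secret  -r mutual-user  -m mutual-secret
#   -d (CHAP during discovery)  -l (during login)  -b (bidirectional)
parse_cmd_line() {
    OPTIND=0   # reset so the function can be called repeatedly, as in the log
    DURING_DISCOVERY=0 DURING_LOGIN=0 BI_DIRECT=0
    CHAP_USER="" CHAP_PASS="" CHAP_MUSER="" CHAP_MPASS="" AUTH_GROUP_ID=1
    local opt
    while getopts ":t:u:s:r:m:dlb" opt; do
        case $opt in
            t) AUTH_GROUP_ID=$OPTARG ;;
            u) CHAP_USER=$OPTARG ;;
            s) CHAP_PASS=$OPTARG ;;
            r) CHAP_MUSER=$OPTARG ;;
            m) CHAP_MPASS=$OPTARG ;;
            d) DURING_DISCOVERY=1 ;;
            l) DURING_LOGIN=1 ;;
            b) BI_DIRECT=1 ;;
            *) echo "unknown option: -$OPTARG" >&2; return 1 ;;
        esac
    done
}

parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b
echo "$AUTH_GROUP_ID $CHAP_USER $DURING_DISCOVERY $BI_DIRECT"   # → 1 chapo 1 1
```

The leading colon in the option string puts getopts into silent error mode, so bad flags reach the `*)` branch with `OPTARG` holding the offending character instead of getopts printing its own diagnostic.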
00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=1 00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 
00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@47 -- # BI_DIRECT=1 00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@86 -- # rpc_cmd iscsi_create_auth_group 1 00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@88 -- # '[' -z mchapo ']' 00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@88 -- # '[' -z 321978654321 ']' 00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@91 -- # rpc_cmd iscsi_auth_group_add_secret -u chapo -s 123456789123 -m mchapo -r 321978654321 1 00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.996 19:46:35 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@95 -- # '[' 0 -eq 1 ']' 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@102 -- # '[' 1 -eq 1 ']' 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@103 -- # '[' 1 -eq 1 ']' 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@104 -- # rpc_cmd iscsi_set_discovery_auth -r -m -g 1 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.255 19:46:35 
iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.255 executing discovery without adding credential to initiator - we expect failure 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@26 -- # echo 'executing discovery without adding credential to initiator - we expect failure' 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@27 -- # rc=0 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@28 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:08:07.255 iscsiadm: Login failed to authenticate with target 00:08:07.255 iscsiadm: discovery login to 10.0.0.1 rejected: initiator failed authorization 00:08:07.255 iscsiadm: Could not perform SendTargets discovery: iSCSI login failed due to authorization failure 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@28 -- # rc=24 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@29 -- # '[' 24 -eq 0 ']' 00:08:07.255 configuring initiator for bidirectional authentication 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@35 -- # echo 'configuring initiator for bidirectional authentication' 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@36 -- # config_chap_credentials_for_initiator -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@113 -- # parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@13 -- # OPTIND=0 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@16
-- # BI_DIRECT=0 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@20 -- # CHAP_MUSER= 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@21 -- # AUTH_GROUP_ID=1 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=1 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 
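Both the reset helper (`default_initiator_chap_credentials`) and the enable path in this log rewrite `/etc/iscsi/iscsid.conf` in place with `sed -i`, commenting the CHAP keys out or swapping in the test credentials, and then restart iscsid. A sketch of the same toggling run against a throwaway copy of the file, so it is safe to execute without root or an iscsid installation:

```shell
#!/usr/bin/env bash
# Toggle discovery-CHAP settings in an iscsid.conf-style file with sed,
# as the test does, but against a temp copy instead of /etc/iscsi.
conf=$(mktemp)
cat > "$conf" <<'EOF'
discovery.sendtargets.auth.authmethod = CHAP
discovery.sendtargets.auth.username = olduser
discovery.sendtargets.auth.password = oldpass
EOF

# Disable: comment the keys out (the default_initiator_chap_credentials step).
sed -i 's/^discovery.sendtargets.auth.authmethod = CHAP/#&/' "$conf"
sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' "$conf"
sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' "$conf"

# Re-enable with the test credentials, as chap_common.sh@127-129 does.
sed -i 's/#discovery.sendtargets.auth.authmethod = CHAP/discovery.sendtargets.auth.authmethod = CHAP/' "$conf"
sed -i 's/#discovery.sendtargets.auth.username =.*/discovery.sendtargets.auth.username = chapo/' "$conf"
sed -i 's/#discovery.sendtargets.auth.password =.*/discovery.sendtargets.auth.password = 123456789123/' "$conf"

grep '^discovery' "$conf"
rm -f "$conf"
```

In the real run each rewrite is followed by `systemctl restart iscsid` (the `restart_iscsid` helper, with its sleeps), because iscsid only reads the file at startup. `sed -i` as used here assumes GNU sed, matching the Linux host in the log.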
00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@47 -- # BI_DIRECT=1 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@114 -- # default_initiator_chap_credentials 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@64 -- # iscsiadm -m node --logout 00:08:07.255 iscsiadm: No matching sessions found 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@64 -- # true 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@65 -- # iscsiadm -m node -o delete 00:08:07.255 iscsiadm: No records found 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@65 -- # true 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@67 -- # sed -i 's/^node.session.auth.authmethod = CHAP/#node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@68 -- # sed -i 's/^node.session.auth.username =.*/#node.session.auth.username = username/' /etc/iscsi/iscsid.conf 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@69 -- # sed -i 's/^node.session.auth.password =.*/#node.session.auth.password 
= password/' /etc/iscsi/iscsid.conf 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@70 -- # sed -i 's/^node.session.auth.username_in =.*/#node.session.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@71 -- # sed -i 's/^node.session.auth.password_in =.*/#node.session.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@73 -- # sed -i 's/^discovery.sendtargets.auth.authmethod = CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@74 -- # sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' /etc/iscsi/iscsid.conf 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@75 -- # sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' /etc/iscsi/iscsid.conf 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@76 -- # sed -i 's/^discovery.sendtargets.auth.username_in =.*/#discovery.sendtargets.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@77 -- # sed -i 's/^discovery.sendtargets.auth.password_in =.*/#discovery.sendtargets.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@78 -- # restart_iscsid 00:08:07.255 19:46:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@58 -- # sleep 3 00:08:10.539 19:46:38 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:08:10.539 19:46:38 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@60 -- # sleep 1 00:08:11.474 19:46:39 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@79 -- # trap 'trap - 
ERR; print_backtrace >&2' ERR 00:08:11.474 19:46:39 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@116 -- # '[' 0 -eq 1 ']' 00:08:11.474 19:46:39 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@126 -- # '[' 1 -eq 1 ']' 00:08:11.474 19:46:39 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@127 -- # sed -i 's/#discovery.sendtargets.auth.authmethod = CHAP/discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:08:11.474 19:46:39 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@128 -- # sed -i 's/#discovery.sendtargets.auth.username =.*/discovery.sendtargets.auth.username = chapo/' /etc/iscsi/iscsid.conf 00:08:11.474 19:46:39 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@129 -- # sed -i 's/#discovery.sendtargets.auth.password =.*/discovery.sendtargets.auth.password = 123456789123/' /etc/iscsi/iscsid.conf 00:08:11.474 19:46:39 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@130 -- # '[' 1 -eq 1 ']' 00:08:11.474 19:46:39 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@130 -- # '[' -n 321978654321 ']' 00:08:11.474 19:46:39 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@130 -- # '[' -n mchapo ']' 00:08:11.474 19:46:39 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@131 -- # sed -i 's/#discovery.sendtargets.auth.username_in =.*/discovery.sendtargets.auth.username_in = mchapo/' /etc/iscsi/iscsid.conf 00:08:11.474 19:46:39 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@132 -- # sed -i 's/#discovery.sendtargets.auth.password_in =.*/discovery.sendtargets.auth.password_in = 321978654321/' /etc/iscsi/iscsid.conf 00:08:11.474 19:46:39 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@135 -- # restart_iscsid 00:08:11.474 19:46:39 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@58 -- # sleep 3 00:08:14.820 19:46:42 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:08:14.820 19:46:42 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@60 -- 
# sleep 1 00:08:15.385 19:46:43 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@136 -- # trap 'trap - ERR; default_initiator_chap_credentials; print_backtrace >&2' ERR 00:08:15.385 executing discovery with adding credential to initiator 00:08:15.385 19:46:43 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@37 -- # echo 'executing discovery with adding credential to initiator' 00:08:15.385 19:46:43 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@38 -- # rc=0 00:08:15.385 19:46:43 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@39 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:08:15.385 10.0.0.1:3260,1 iqn.2016-06.io.spdk:disk1 00:08:15.385 19:46:43 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@40 -- # '[' 0 -ne 0 ']' 00:08:15.385 DONE 00:08:15.385 19:46:43 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@44 -- # echo DONE 00:08:15.385 19:46:43 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@45 -- # default_initiator_chap_credentials 00:08:15.385 19:46:43 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@64 -- # iscsiadm -m node --logout 00:08:15.385 iscsiadm: No matching sessions found 00:08:15.385 19:46:43 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@64 -- # true 00:08:15.385 19:46:43 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@65 -- # iscsiadm -m node -o delete 00:08:15.385 19:46:43 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@67 -- # sed -i 's/^node.session.auth.authmethod = CHAP/#node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:08:15.385 19:46:43 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@68 -- # sed -i 's/^node.session.auth.username =.*/#node.session.auth.username = username/' /etc/iscsi/iscsid.conf 00:08:15.385 19:46:43 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@69 -- # sed -i 's/^node.session.auth.password =.*/#node.session.auth.password = password/' /etc/iscsi/iscsid.conf 00:08:15.385 19:46:43 
iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@70 -- # sed -i 's/^node.session.auth.username_in =.*/#node.session.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:08:15.385 19:46:43 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@71 -- # sed -i 's/^node.session.auth.password_in =.*/#node.session.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:08:15.385 19:46:43 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@73 -- # sed -i 's/^discovery.sendtargets.auth.authmethod = CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:08:15.385 19:46:43 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@74 -- # sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' /etc/iscsi/iscsid.conf 00:08:15.385 19:46:44 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@75 -- # sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' /etc/iscsi/iscsid.conf 00:08:15.385 19:46:44 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@76 -- # sed -i 's/^discovery.sendtargets.auth.username_in =.*/#discovery.sendtargets.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:08:15.385 19:46:44 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@77 -- # sed -i 's/^discovery.sendtargets.auth.password_in =.*/#discovery.sendtargets.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:08:15.385 19:46:44 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@78 -- # restart_iscsid 00:08:15.385 19:46:44 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@58 -- # sleep 3 00:08:18.671 19:46:47 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:08:18.671 19:46:47 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@60 -- # sleep 1 00:08:19.606 19:46:48 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@79 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:19.606 19:46:48 
iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@47 -- # trap - SIGINT SIGTERM EXIT 00:08:19.606 19:46:48 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@49 -- # killprocess 65906 00:08:19.606 19:46:48 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@950 -- # '[' -z 65906 ']' 00:08:19.606 19:46:48 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@954 -- # kill -0 65906 00:08:19.606 19:46:48 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@955 -- # uname 00:08:19.606 19:46:48 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:19.606 19:46:48 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65906 00:08:19.606 killing process with pid 65906 00:08:19.606 19:46:48 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:19.606 19:46:48 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:19.606 19:46:48 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65906' 00:08:19.606 19:46:48 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@969 -- # kill 65906 00:08:19.606 19:46:48 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@974 -- # wait 65906 00:08:20.172 19:46:48 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@51 -- # iscsitestfini 00:08:20.172 19:46:48 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:08:20.172 00:08:20.172 real 0m15.490s 00:08:20.172 user 0m15.570s 00:08:20.172 sys 0m0.659s 00:08:20.172 19:46:48 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.172 ************************************ 00:08:20.172 END TEST chap_during_discovery 00:08:20.172 ************************************ 00:08:20.172 19:46:48 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 
-- # set +x 00:08:20.172 19:46:48 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@33 -- # run_test chap_mutual_auth /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_mutual_not_set.sh 00:08:20.172 19:46:48 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:20.172 19:46:48 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.172 19:46:48 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:08:20.430 ************************************ 00:08:20.430 START TEST chap_mutual_auth 00:08:20.430 ************************************ 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_mutual_not_set.sh 00:08:20.430 * Looking for test storage... 00:08:20.430 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 
00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_common.sh 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@7 -- # TARGET_NAME=iqn.2016-06.io.spdk:disk1 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@8 -- # TARGET_ALIAS_NAME=disk1_alias 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@9 -- # MALLOC_BDEV_SIZE=64 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@10 -- # MALLOC_BLOCK_SIZE=512 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@13 -- # USER=chapo 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@14 -- # MUSER=mchapo 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@15 -- # PASS=123456789123 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@16 -- 
# MPASS=321978654321 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@19 -- # iscsitestinit 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@21 -- # set_up_iscsi_target 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@140 -- # timing_enter start_iscsi_tgt 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@142 -- # pid=66179 00:08:20.430 iSCSI target launched. pid: 66179 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@143 -- # echo 'iSCSI target launched. pid: 66179' 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@144 -- # trap 'killprocess $pid;exit 1' SIGINT SIGTERM EXIT 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@145 -- # waitforlisten 66179 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@831 -- # '[' -z 66179 ']' 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:20.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:08:20.430 19:46:48 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@141 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc 00:08:20.430 [2024-07-24 19:46:49.017215] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:08:20.430 [2024-07-24 19:46:49.017315] [ DPDK EAL parameters: iscsi --no-shconf -c 0x2 -m 512 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66179 ] 00:08:20.688 [2024-07-24 19:46:49.288340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.946 [2024-07-24 19:46:49.409113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.524 19:46:50 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:21.524 19:46:50 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@864 -- # return 0 00:08:21.524 19:46:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@146 -- # rpc_cmd iscsi_set_options -o 30 -a 4 00:08:21.524 19:46:50 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.524 19:46:50 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:08:21.524 19:46:50 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.524 19:46:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@147 -- # rpc_cmd framework_start_init 00:08:21.524 19:46:50 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.524 19:46:50 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:08:21.524 [2024-07-24 
19:46:50.061399] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:21.782 19:46:50 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.782 iscsi_tgt is listening. Running tests... 00:08:21.782 19:46:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@148 -- # echo 'iscsi_tgt is listening. Running tests...' 00:08:21.782 19:46:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@149 -- # timing_exit start_iscsi_tgt 00:08:21.782 19:46:50 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:21.782 19:46:50 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:08:21.782 19:46:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@151 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:08:21.782 19:46:50 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.782 19:46:50 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:08:21.782 19:46:50 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.782 19:46:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@152 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:08:21.782 19:46:50 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.782 19:46:50 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:08:21.782 19:46:50 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.782 19:46:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@153 -- # rpc_cmd bdev_malloc_create 64 512 00:08:21.782 19:46:50 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.782 19:46:50 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:08:21.782 Malloc0 00:08:21.782 19:46:50 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.782 
19:46:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@154 -- # rpc_cmd iscsi_create_target_node iqn.2016-06.io.spdk:disk1 disk1_alias Malloc0:0 1:2 256 -d 00:08:21.782 19:46:50 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.782 19:46:50 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:08:21.782 19:46:50 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.782 19:46:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@155 -- # sleep 1 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@156 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:08:22.718 configuring target for authentication 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@24 -- # echo 'configuring target for authentication' 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@25 -- # config_chap_credentials_for_target -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@84 -- # parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@13 -- # OPTIND=0 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@16 -- # BI_DIRECT=0 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@20 -- # CHAP_MUSER= 00:08:22.718 19:46:51 
iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@21 -- # AUTH_GROUP_ID=1 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=1 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 
00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@44 -- # DURING_LOGIN=1 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@86 -- # rpc_cmd iscsi_create_auth_group 1 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@88 -- # '[' -z mchapo ']' 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@88 -- # '[' -z 321978654321 ']' 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@91 -- # rpc_cmd iscsi_auth_group_add_secret -u chapo -s 123456789123 -m mchapo -r 321978654321 1 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@95 -- # '[' 1 -eq 1 ']' 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@96 -- # '[' 0 -eq 1 ']' 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@99 -- # rpc_cmd iscsi_target_node_set_auth -g 1 -r iqn.2016-06.io.spdk:disk1 00:08:22.718 19:46:51 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.719 19:46:51 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:08:22.719 19:46:51 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:22.719 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@102 -- # '[' 1 -eq 1 ']' 00:08:22.719 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@103 -- # '[' 0 -eq 1 ']' 00:08:22.719 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@106 -- # rpc_cmd iscsi_set_discovery_auth -r -g 1 00:08:22.719 19:46:51 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.719 19:46:51 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:08:22.719 19:46:51 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.719 executing discovery without adding credential to initiator - we expect failure 00:08:22.719 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@26 -- # echo 'executing discovery without adding credential to initiator - we expect failure' 00:08:22.719 configuring initiator with bidirectional authentication 00:08:22.719 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@28 -- # echo 'configuring initiator with bidirectional authentication' 00:08:22.719 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@29 -- # config_chap_credentials_for_initiator -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l -b 00:08:22.719 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@113 -- # parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l -b 00:08:22.719 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@13 -- # OPTIND=0 00:08:22.719 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:08:22.719 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:08:22.719 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@16 -- # BI_DIRECT=0 00:08:22.719 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:08:22.719 19:46:51 iscsi_tgt.chap_mutual_auth -- 
chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:08:22.719 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:08:22.719 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@20 -- # CHAP_MUSER= 00:08:22.719 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@21 -- # AUTH_GROUP_ID=1 00:08:22.719 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:22.719 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:08:22.719 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=1 00:08:22.719 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:22.719 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:08:22.719 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:08:22.719 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:22.719 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:08:22.719 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:08:22.719 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:22.977 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:08:22.977 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:08:22.977 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:22.977 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:08:22.977 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:08:22.977 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:22.977 19:46:51 iscsi_tgt.chap_mutual_auth -- 
chap/chap_common.sh@24 -- # case ${opt} in 00:08:22.977 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:08:22.977 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:22.977 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:08:22.977 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@44 -- # DURING_LOGIN=1 00:08:22.977 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:22.977 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:08:22.977 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@47 -- # BI_DIRECT=1 00:08:22.977 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:22.977 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@114 -- # default_initiator_chap_credentials 00:08:22.977 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@64 -- # iscsiadm -m node --logout 00:08:22.977 iscsiadm: No matching sessions found 00:08:22.977 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@64 -- # true 00:08:22.977 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@65 -- # iscsiadm -m node -o delete 00:08:22.977 iscsiadm: No records found 00:08:22.977 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@65 -- # true 00:08:22.977 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@67 -- # sed -i 's/^node.session.auth.authmethod = CHAP/#node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:08:22.977 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@68 -- # sed -i 's/^node.session.auth.username =.*/#node.session.auth.username = username/' /etc/iscsi/iscsid.conf 00:08:22.977 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@69 -- # sed -i 's/^node.session.auth.password =.*/#node.session.auth.password = password/' /etc/iscsi/iscsid.conf 
00:08:22.977 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@70 -- # sed -i 's/^node.session.auth.username_in =.*/#node.session.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:08:22.977 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@71 -- # sed -i 's/^node.session.auth.password_in =.*/#node.session.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:08:22.977 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@73 -- # sed -i 's/^discovery.sendtargets.auth.authmethod = CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:08:22.977 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@74 -- # sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' /etc/iscsi/iscsid.conf 00:08:22.977 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@75 -- # sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' /etc/iscsi/iscsid.conf 00:08:22.977 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@76 -- # sed -i 's/^discovery.sendtargets.auth.username_in =.*/#discovery.sendtargets.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:08:22.977 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@77 -- # sed -i 's/^discovery.sendtargets.auth.password_in =.*/#discovery.sendtargets.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:08:22.977 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@78 -- # restart_iscsid 00:08:22.977 19:46:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@58 -- # sleep 3 00:08:26.337 19:46:54 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:08:26.337 19:46:54 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@60 -- # sleep 1 00:08:26.905 19:46:55 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@79 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:26.905 19:46:55 iscsi_tgt.chap_mutual_auth -- 
chap/chap_common.sh@116 -- # '[' 1 -eq 1 ']' 00:08:26.905 19:46:55 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@117 -- # sed -i 's/#node.session.auth.authmethod = CHAP/node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:08:26.905 19:46:55 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@118 -- # sed -i 's/#node.session.auth.username =.*/node.session.auth.username = chapo/' /etc/iscsi/iscsid.conf 00:08:26.905 19:46:55 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@119 -- # sed -i 's/#node.session.auth.password =.*/node.session.auth.password = 123456789123/' /etc/iscsi/iscsid.conf 00:08:26.905 19:46:55 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@120 -- # '[' 1 -eq 1 ']' 00:08:26.905 19:46:55 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@120 -- # '[' -n 321978654321 ']' 00:08:26.905 19:46:55 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@120 -- # '[' -n mchapo ']' 00:08:26.905 19:46:55 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@121 -- # sed -i 's/#node.session.auth.username_in =.*/node.session.auth.username_in = mchapo/' /etc/iscsi/iscsid.conf 00:08:26.905 19:46:55 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@122 -- # sed -i 's/#node.session.auth.password_in =.*/node.session.auth.password_in = 321978654321/' /etc/iscsi/iscsid.conf 00:08:26.905 19:46:55 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@126 -- # '[' 1 -eq 1 ']' 00:08:26.905 19:46:55 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@127 -- # sed -i 's/#discovery.sendtargets.auth.authmethod = CHAP/discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:08:26.905 19:46:55 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@128 -- # sed -i 's/#discovery.sendtargets.auth.username =.*/discovery.sendtargets.auth.username = chapo/' /etc/iscsi/iscsid.conf 00:08:26.905 19:46:55 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@129 -- # sed -i 's/#discovery.sendtargets.auth.password =.*/discovery.sendtargets.auth.password = 123456789123/' 
/etc/iscsi/iscsid.conf 00:08:27.163 19:46:55 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@130 -- # '[' 1 -eq 1 ']' 00:08:27.163 19:46:55 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@130 -- # '[' -n 321978654321 ']' 00:08:27.163 19:46:55 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@130 -- # '[' -n mchapo ']' 00:08:27.163 19:46:55 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@131 -- # sed -i 's/#discovery.sendtargets.auth.username_in =.*/discovery.sendtargets.auth.username_in = mchapo/' /etc/iscsi/iscsid.conf 00:08:27.163 19:46:55 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@132 -- # sed -i 's/#discovery.sendtargets.auth.password_in =.*/discovery.sendtargets.auth.password_in = 321978654321/' /etc/iscsi/iscsid.conf 00:08:27.163 19:46:55 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@135 -- # restart_iscsid 00:08:27.163 19:46:55 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@58 -- # sleep 3 00:08:30.444 19:46:58 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:08:30.444 19:46:58 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@60 -- # sleep 1 00:08:31.010 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@136 -- # trap 'trap - ERR; default_initiator_chap_credentials; print_backtrace >&2' ERR 00:08:31.010 executing discovery - target should not be discovered since the -m option was not used 00:08:31.010 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@30 -- # echo 'executing discovery - target should not be discovered since the -m option was not used' 00:08:31.010 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@31 -- # rc=0 00:08:31.011 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@32 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:08:31.011 [2024-07-24 19:46:59.668401] iscsi.c: 982:iscsi_auth_params: *ERROR*: Initiator wants to use mutual CHAP for security, but it's not enabled. 
00:08:31.011 [2024-07-24 19:46:59.668465] iscsi.c:1957:iscsi_op_login_rsp_handle_csg_bit: *ERROR*: iscsi_auth_params() failed 00:08:31.011 iscsiadm: Login failed to authenticate with target 00:08:31.011 iscsiadm: discovery login to 10.0.0.1 rejected: initiator failed authorization 00:08:31.011 iscsiadm: Could not perform SendTargets discovery: iSCSI login failed due to authorization failure 00:08:31.011 configuring target for authentication with the -m option 00:08:31.011 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@32 -- # rc=24 00:08:31.011 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@33 -- # '[' 24 -eq 0 ']' 00:08:31.011 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@37 -- # echo 'configuring target for authentication with the -m option' 00:08:31.011 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@38 -- # config_chap_credentials_for_target -t 2 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l -b 00:08:31.011 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@84 -- # parse_cmd_line -t 2 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l -b 00:08:31.011 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@13 -- # OPTIND=0 00:08:31.011 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:08:31.011 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:08:31.011 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@16 -- # BI_DIRECT=0 00:08:31.011 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:08:31.011 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:08:31.011 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:08:31.011 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@20 -- # CHAP_MUSER= 00:08:31.011 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@21 -- 
# AUTH_GROUP_ID=1 00:08:31.011 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:31.011 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:08:31.011 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=2 00:08:31.011 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:31.011 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:08:31.011 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:08:31.011 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:31.011 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:08:31.011 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:08:31.011 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:31.011 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:08:31.011 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:08:31.269 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:31.269 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:08:31.269 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:08:31.269 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:31.269 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:08:31.269 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:08:31.269 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:31.269 19:46:59 iscsi_tgt.chap_mutual_auth -- 
chap/chap_common.sh@24 -- # case ${opt} in 00:08:31.269 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@44 -- # DURING_LOGIN=1 00:08:31.269 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:31.269 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:08:31.269 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@47 -- # BI_DIRECT=1 00:08:31.269 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:08:31.269 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@86 -- # rpc_cmd iscsi_create_auth_group 2 00:08:31.269 19:46:59 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.269 19:46:59 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:08:31.269 19:46:59 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.269 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@88 -- # '[' -z mchapo ']' 00:08:31.269 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@88 -- # '[' -z 321978654321 ']' 00:08:31.269 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@91 -- # rpc_cmd iscsi_auth_group_add_secret -u chapo -s 123456789123 -m mchapo -r 321978654321 2 00:08:31.269 19:46:59 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.269 19:46:59 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:08:31.269 19:46:59 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.269 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@95 -- # '[' 1 -eq 1 ']' 00:08:31.269 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@96 -- # '[' 1 -eq 1 ']' 00:08:31.269 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@97 -- # rpc_cmd iscsi_target_node_set_auth -g 2 -r -m iqn.2016-06.io.spdk:disk1 00:08:31.269 19:46:59 
iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.269 19:46:59 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:08:31.269 19:46:59 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.269 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@102 -- # '[' 1 -eq 1 ']' 00:08:31.269 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@103 -- # '[' 1 -eq 1 ']' 00:08:31.269 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@104 -- # rpc_cmd iscsi_set_discovery_auth -r -m -g 2 00:08:31.269 19:46:59 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.269 19:46:59 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:08:31.269 executing discovery: 00:08:31.269 19:46:59 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.269 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@39 -- # echo 'executing discovery:' 00:08:31.269 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@40 -- # rc=0 00:08:31.269 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@41 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:08:31.269 10.0.0.1:3260,1 iqn.2016-06.io.spdk:disk1 00:08:31.269 executing login: 00:08:31.269 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@42 -- # '[' 0 -ne 0 ']' 00:08:31.269 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@46 -- # echo 'executing login:' 00:08:31.270 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@47 -- # rc=0 00:08:31.270 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@48 -- # iscsiadm -m node -l -p 10.0.0.1:3260 00:08:31.270 Logging in to [iface: default, target: iqn.2016-06.io.spdk:disk1, portal: 10.0.0.1,3260] 00:08:31.270 Login to [iface: default, target: iqn.2016-06.io.spdk:disk1, portal: 10.0.0.1,3260] 
successful. 00:08:31.270 DONE 00:08:31.270 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@49 -- # '[' 0 -ne 0 ']' 00:08:31.270 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@54 -- # echo DONE 00:08:31.270 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@55 -- # default_initiator_chap_credentials 00:08:31.270 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@64 -- # iscsiadm -m node --logout 00:08:31.270 Logging out of session [sid: 5, target: iqn.2016-06.io.spdk:disk1, portal: 10.0.0.1,3260] 00:08:31.270 Logout of [sid: 5, target: iqn.2016-06.io.spdk:disk1, portal: 10.0.0.1,3260] successful. 00:08:31.270 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@65 -- # iscsiadm -m node -o delete 00:08:31.270 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@67 -- # sed -i 's/^node.session.auth.authmethod = CHAP/#node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:08:31.270 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@68 -- # sed -i 's/^node.session.auth.username =.*/#node.session.auth.username = username/' /etc/iscsi/iscsid.conf 00:08:31.270 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@69 -- # sed -i 's/^node.session.auth.password =.*/#node.session.auth.password = password/' /etc/iscsi/iscsid.conf 00:08:31.270 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@70 -- # sed -i 's/^node.session.auth.username_in =.*/#node.session.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:08:31.270 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@71 -- # sed -i 's/^node.session.auth.password_in =.*/#node.session.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:08:31.270 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@73 -- # sed -i 's/^discovery.sendtargets.auth.authmethod = CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:08:31.270 19:46:59 iscsi_tgt.chap_mutual_auth -- 
chap/chap_common.sh@74 -- # sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' /etc/iscsi/iscsid.conf 00:08:31.270 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@75 -- # sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' /etc/iscsi/iscsid.conf 00:08:31.270 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@76 -- # sed -i 's/^discovery.sendtargets.auth.username_in =.*/#discovery.sendtargets.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:08:31.270 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@77 -- # sed -i 's/^discovery.sendtargets.auth.password_in =.*/#discovery.sendtargets.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:08:31.270 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@78 -- # restart_iscsid 00:08:31.270 19:46:59 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@58 -- # sleep 3 00:08:34.552 19:47:02 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:08:34.552 19:47:02 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@60 -- # sleep 1 00:08:35.486 19:47:03 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@79 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:35.486 19:47:03 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@57 -- # trap - SIGINT SIGTERM EXIT 00:08:35.486 19:47:03 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@59 -- # killprocess 66179 00:08:35.486 19:47:03 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@950 -- # '[' -z 66179 ']' 00:08:35.486 19:47:03 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@954 -- # kill -0 66179 00:08:35.486 19:47:03 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@955 -- # uname 00:08:35.486 19:47:03 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:35.486 19:47:03 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 66179 00:08:35.486 killing process with pid 66179 00:08:35.486 19:47:03 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:35.486 19:47:03 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:35.486 19:47:03 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66179' 00:08:35.486 19:47:03 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@969 -- # kill 66179 00:08:35.486 19:47:03 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@974 -- # wait 66179 00:08:36.052 19:47:04 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@61 -- # iscsitestfini 00:08:36.052 19:47:04 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:08:36.052 00:08:36.052 real 0m15.768s 00:08:36.052 user 0m15.771s 00:08:36.052 sys 0m0.805s 00:08:36.052 ************************************ 00:08:36.052 END TEST chap_mutual_auth 00:08:36.052 ************************************ 00:08:36.052 19:47:04 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:36.052 19:47:04 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:08:36.052 19:47:04 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@34 -- # run_test iscsi_tgt_reset /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/reset/reset.sh 00:08:36.052 19:47:04 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:36.052 19:47:04 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:36.052 19:47:04 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:08:36.052 ************************************ 00:08:36.052 START TEST iscsi_tgt_reset 00:08:36.052 ************************************ 00:08:36.052 19:47:04 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/reset/reset.sh 00:08:36.311 * Looking for test storage... 
00:08:36.311 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/reset 00:08:36.311 19:47:04 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:08:36.311 19:47:04 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:08:36.311 19:47:04 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:08:36.311 19:47:04 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:08:36.311 19:47:04 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:08:36.311 19:47:04 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:08:36.311 19:47:04 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:08:36.311 19:47:04 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:08:36.311 19:47:04 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:08:36.311 19:47:04 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:08:36.311 19:47:04 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:08:36.311 19:47:04 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:08:36.311 19:47:04 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:08:36.311 19:47:04 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:08:36.311 19:47:04 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:08:36.311 19:47:04 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:08:36.311 19:47:04 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:08:36.311 19:47:04 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:08:36.311 19:47:04 
iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:08:36.311 19:47:04 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:08:36.311 19:47:04 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@11 -- # iscsitestinit 00:08:36.311 19:47:04 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:08:36.311 19:47:04 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:36.311 19:47:04 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:36.311 19:47:04 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@16 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:08:36.311 19:47:04 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@18 -- # hash sg_reset 00:08:36.311 19:47:04 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@22 -- # timing_enter start_iscsi_tgt 00:08:36.311 19:47:04 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:36.311 19:47:04 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:08:36.311 Process pid: 66484 00:08:36.311 19:47:04 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@25 -- # pid=66484 00:08:36.311 19:47:04 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@26 -- # echo 'Process pid: 66484' 00:08:36.311 19:47:04 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@28 -- # trap 'killprocess $pid; exit 1' SIGINT SIGTERM EXIT 00:08:36.311 19:47:04 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@30 -- # waitforlisten 66484 00:08:36.311 19:47:04 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@24 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:08:36.311 19:47:04 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@831 -- # '[' -z 66484 ']' 00:08:36.311 19:47:04 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.311 19:47:04 iscsi_tgt.iscsi_tgt_reset -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:08:36.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.312 19:47:04 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.312 19:47:04 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:36.312 19:47:04 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:08:36.312 [2024-07-24 19:47:04.818887] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:08:36.312 [2024-07-24 19:47:04.818999] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66484 ] 00:08:36.312 [2024-07-24 19:47:04.958685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.570 [2024-07-24 19:47:05.068194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.504 19:47:05 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:37.504 19:47:05 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@864 -- # return 0 00:08:37.504 19:47:05 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@31 -- # rpc_cmd iscsi_set_options -o 30 -a 16 00:08:37.504 19:47:05 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.504 19:47:05 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:08:37.504 19:47:05 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.504 19:47:05 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@32 -- # rpc_cmd framework_start_init 00:08:37.504 19:47:05 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.504 19:47:05 
iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:08:37.504 [2024-07-24 19:47:05.919207] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:37.504 iscsi_tgt is listening. Running tests... 00:08:37.504 19:47:06 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.504 19:47:06 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@33 -- # echo 'iscsi_tgt is listening. Running tests...' 00:08:37.504 19:47:06 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@35 -- # timing_exit start_iscsi_tgt 00:08:37.504 19:47:06 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:37.504 19:47:06 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:08:37.765 19:47:06 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@37 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:08:37.765 19:47:06 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.765 19:47:06 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:08:37.765 19:47:06 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.765 19:47:06 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@38 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:08:37.765 19:47:06 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.765 19:47:06 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:08:37.765 19:47:06 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.765 19:47:06 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@39 -- # rpc_cmd bdev_malloc_create 64 512 00:08:37.765 19:47:06 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.765 19:47:06 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:08:37.765 Malloc0 00:08:37.765 19:47:06 iscsi_tgt.iscsi_tgt_reset -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.765 19:47:06 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@44 -- # rpc_cmd iscsi_create_target_node Target3 Target3_alias Malloc0:0 1:2 64 -d 00:08:37.765 19:47:06 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.765 19:47:06 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:08:37.765 19:47:06 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.765 19:47:06 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@45 -- # sleep 1 00:08:38.714 19:47:07 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@47 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:08:38.714 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:08:38.714 19:47:07 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@48 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:08:38.714 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:08:38.714 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
00:08:38.714 19:47:07 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@49 -- # waitforiscsidevices 1 00:08:38.714 19:47:07 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@116 -- # local num=1 00:08:38.714 19:47:07 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:08:38.714 19:47:07 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:08:38.714 19:47:07 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:08:38.714 19:47:07 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:08:38.714 [2024-07-24 19:47:07.331851] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:38.714 19:47:07 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@119 -- # n=1 00:08:38.714 19:47:07 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:08:38.714 19:47:07 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@123 -- # return 0 00:08:38.714 19:47:07 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@51 -- # grep 'Attached scsi disk' 00:08:38.714 19:47:07 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@51 -- # iscsiadm -m session -P 3 00:08:38.714 19:47:07 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@51 -- # awk '{print $4}' 00:08:38.714 FIO pid: 66547 00:08:38.714 19:47:07 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@51 -- # dev=sda 00:08:38.714 19:47:07 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@54 -- # fiopid=66547 00:08:38.714 19:47:07 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t read -r 60 00:08:38.714 19:47:07 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@55 -- # echo 'FIO pid: 66547' 00:08:38.714 19:47:07 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@57 -- # trap 'iscsicleanup; killprocess $pid; killprocess $fiopid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:08:38.714 19:47:07 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@60 -- # for i in 1 2 3 
00:08:38.714 19:47:07 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@61 -- # sleep 1 00:08:38.973 [global] 00:08:38.973 thread=1 00:08:38.973 invalidate=1 00:08:38.973 rw=read 00:08:38.973 time_based=1 00:08:38.973 runtime=60 00:08:38.973 ioengine=libaio 00:08:38.973 direct=1 00:08:38.973 bs=512 00:08:38.973 iodepth=1 00:08:38.973 norandommap=1 00:08:38.973 numjobs=1 00:08:38.973 00:08:38.973 [job0] 00:08:38.973 filename=/dev/sda 00:08:38.973 queue_depth set to 113 (sda) 00:08:38.973 job0: (g=0): rw=read, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:08:38.973 fio-3.35 00:08:38.973 Starting 1 thread 00:08:39.910 19:47:08 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@62 -- # kill -s 0 66484 00:08:39.910 19:47:08 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@63 -- # kill -s 0 66547 00:08:39.910 19:47:08 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@64 -- # sg_reset -d /dev/sda 00:08:39.910 [2024-07-24 19:47:08.360977] iscsi.c:3690:iscsi_pdu_hdr_op_task: *NOTICE*: LOGICAL_UNIT_RESET 00:08:39.910 [2024-07-24 19:47:08.361068] lun.c: 157:_scsi_lun_execute_mgmt_task: *NOTICE*: Bdev scsi reset on lun reset 00:08:39.910 19:47:08 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@65 -- # sleep 1 00:08:39.910 [2024-07-24 19:47:08.362925] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:40.854 19:47:09 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@66 -- # kill -s 0 66484 00:08:40.854 19:47:09 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@67 -- # kill -s 0 66547 00:08:40.855 19:47:09 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@60 -- # for i in 1 2 3 00:08:40.855 19:47:09 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@61 -- # sleep 1 00:08:41.790 19:47:10 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@62 -- # kill -s 0 66484 00:08:41.790 19:47:10 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@63 -- # kill -s 0 66547 00:08:41.790 19:47:10 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@64 -- # sg_reset -d /dev/sda 00:08:41.790 [2024-07-24 
19:47:10.376063] iscsi.c:3690:iscsi_pdu_hdr_op_task: *NOTICE*: LOGICAL_UNIT_RESET 00:08:41.790 [2024-07-24 19:47:10.376172] lun.c: 157:_scsi_lun_execute_mgmt_task: *NOTICE*: Bdev scsi reset on lun reset 00:08:41.790 [2024-07-24 19:47:10.377470] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:41.790 19:47:10 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@65 -- # sleep 1 00:08:42.727 19:47:11 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@66 -- # kill -s 0 66484 00:08:42.727 19:47:11 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@67 -- # kill -s 0 66547 00:08:42.727 19:47:11 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@60 -- # for i in 1 2 3 00:08:42.727 19:47:11 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@61 -- # sleep 1 00:08:44.100 19:47:12 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@62 -- # kill -s 0 66484 00:08:44.100 19:47:12 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@63 -- # kill -s 0 66547 00:08:44.100 19:47:12 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@64 -- # sg_reset -d /dev/sda 00:08:44.100 [2024-07-24 19:47:12.390590] iscsi.c:3690:iscsi_pdu_hdr_op_task: *NOTICE*: LOGICAL_UNIT_RESET 00:08:44.100 [2024-07-24 19:47:12.390677] lun.c: 157:_scsi_lun_execute_mgmt_task: *NOTICE*: Bdev scsi reset on lun reset 00:08:44.100 [2024-07-24 19:47:12.392187] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:44.100 19:47:12 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@65 -- # sleep 1 00:08:45.036 19:47:13 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@66 -- # kill -s 0 66484 00:08:45.036 19:47:13 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@67 -- # kill -s 0 66547 00:08:45.036 19:47:13 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@70 -- # kill 66547 00:08:45.036 19:47:13 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@71 -- # wait 66547 00:08:45.036 Cleaning up iSCSI connection 00:08:45.036 19:47:13 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@71 -- # true 00:08:45.036 19:47:13 iscsi_tgt.iscsi_tgt_reset -- 
reset/reset.sh@73 -- # trap - SIGINT SIGTERM EXIT 00:08:45.036 19:47:13 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@75 -- # iscsicleanup 00:08:45.036 19:47:13 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection' 00:08:45.036 19:47:13 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout 00:08:45.036 fio: io_u error on file /dev/sda: No such device: read offset=46714368, buflen=512 00:08:45.036 fio: pid=66579, err=19/file:io_u.c:1889, func=io_u error, error=No such device 00:08:45.036 00:08:45.036 job0: (groupid=0, jobs=1): err=19 (file:io_u.c:1889, func=io_u error, error=No such device): pid=66579: Wed Jul 24 19:47:13 2024 00:08:45.036 read: IOPS=16.0k, BW=7982KiB/s (8174kB/s)(44.5MiB/5715msec) 00:08:45.036 slat (usec): min=3, max=1109, avg= 6.03, stdev= 5.69 00:08:45.036 clat (nsec): min=1862, max=690810, avg=55847.52, stdev=10798.11 00:08:45.036 lat (usec): min=44, max=696, avg=61.87, stdev=11.75 00:08:45.036 clat percentiles (usec): 00:08:45.036 | 1.00th=[ 41], 5.00th=[ 46], 10.00th=[ 47], 20.00th=[ 50], 00:08:45.036 | 30.00th=[ 51], 40.00th=[ 53], 50.00th=[ 56], 60.00th=[ 57], 00:08:45.036 | 70.00th=[ 58], 80.00th=[ 61], 90.00th=[ 67], 95.00th=[ 72], 00:08:45.036 | 99.00th=[ 89], 99.50th=[ 99], 99.90th=[ 155], 99.95th=[ 204], 00:08:45.036 | 99.99th=[ 318] 00:08:45.036 bw ( KiB/s): min= 7346, max= 8800, per=100.00%, avg=8007.91, stdev=425.96, samples=11 00:08:45.036 iops : min=14692, max=17600, avg=16016.00, stdev=851.96, samples=11 00:08:45.036 lat (usec) : 2=0.01%, 4=0.03%, 10=0.01%, 20=0.01%, 50=25.28% 00:08:45.036 lat (usec) : 100=74.24%, 250=0.43%, 500=0.03%, 750=0.01% 00:08:45.036 cpu : usr=5.67%, sys=14.16%, ctx=92870, majf=0, minf=2 00:08:45.036 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:45.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:45.036 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:08:45.036 issued rwts: total=91240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:45.036 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:45.036 00:08:45.036 Run status group 0 (all jobs): 00:08:45.036 READ: bw=7982KiB/s (8174kB/s), 7982KiB/s-7982KiB/s (8174kB/s-8174kB/s), io=44.5MiB (46.7MB), run=5715-5715msec 00:08:45.036 00:08:45.036 Disk stats (read/write): 00:08:45.036 sda: ios=90048/0, merge=0/0, ticks=4739/0, in_queue=4739, util=98.41% 00:08:45.036 Logging out of session [sid: 6, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:08:45.036 Logout of [sid: 6, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:08:45.036 19:47:13 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete 00:08:45.036 19:47:13 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@985 -- # rm -rf 00:08:45.036 19:47:13 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@76 -- # killprocess 66484 00:08:45.036 19:47:13 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@950 -- # '[' -z 66484 ']' 00:08:45.036 19:47:13 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@954 -- # kill -0 66484 00:08:45.036 19:47:13 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@955 -- # uname 00:08:45.036 19:47:13 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:45.036 19:47:13 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66484 00:08:45.036 killing process with pid 66484 00:08:45.036 19:47:13 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:45.036 19:47:13 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:45.036 19:47:13 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66484' 00:08:45.036 19:47:13 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@969 -- # kill 66484 
00:08:45.036 19:47:13 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@974 -- # wait 66484 00:08:45.603 19:47:14 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@77 -- # iscsitestfini 00:08:45.603 19:47:14 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:08:45.603 ************************************ 00:08:45.603 END TEST iscsi_tgt_reset 00:08:45.603 ************************************ 00:08:45.603 00:08:45.603 real 0m9.476s 00:08:45.603 user 0m6.844s 00:08:45.603 sys 0m2.489s 00:08:45.603 19:47:14 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:45.603 19:47:14 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:08:45.603 19:47:14 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@35 -- # run_test iscsi_tgt_rpc_config /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config/rpc_config.sh 00:08:45.603 19:47:14 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:45.603 19:47:14 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:45.603 19:47:14 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:08:45.603 ************************************ 00:08:45.603 START TEST iscsi_tgt_rpc_config 00:08:45.603 ************************************ 00:08:45.603 19:47:14 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config/rpc_config.sh 00:08:45.861 * Looking for test storage... 
00:08:45.861 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config 00:08:45.861 19:47:14 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:08:45.861 19:47:14 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:08:45.861 19:47:14 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:08:45.861 19:47:14 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:08:45.861 19:47:14 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:08:45.861 19:47:14 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:08:45.861 19:47:14 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:08:45.861 19:47:14 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:08:45.861 19:47:14 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:08:45.861 19:47:14 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:08:45.861 19:47:14 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:08:45.861 19:47:14 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:08:45.861 19:47:14 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:08:45.861 19:47:14 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:08:45.861 19:47:14 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:08:45.861 19:47:14 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:08:45.861 19:47:14 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:08:45.861 
19:47:14 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:08:45.861 19:47:14 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:08:45.861 19:47:14 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:08:45.861 19:47:14 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@11 -- # iscsitestinit 00:08:45.861 19:47:14 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:08:45.861 19:47:14 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:45.861 19:47:14 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:45.861 19:47:14 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@16 -- # rpc_config_py=/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config/rpc_config.py 00:08:45.861 19:47:14 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@18 -- # timing_enter start_iscsi_tgt 00:08:45.861 19:47:14 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:45.861 19:47:14 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@10 -- # set +x 00:08:45.861 19:47:14 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@21 -- # pid=66728 00:08:45.861 Process pid: 66728 00:08:45.861 19:47:14 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@22 -- # echo 'Process pid: 66728' 00:08:45.861 19:47:14 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@20 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:08:45.861 19:47:14 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@24 -- # trap 'killprocess $pid; exit 1' SIGINT SIGTERM EXIT 00:08:45.861 19:47:14 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@26 -- # waitforlisten 66728 00:08:45.861 19:47:14 
iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@831 -- # '[' -z 66728 ']' 00:08:45.861 19:47:14 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.861 19:47:14 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:45.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.861 19:47:14 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.861 19:47:14 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:45.861 19:47:14 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@10 -- # set +x 00:08:45.861 [2024-07-24 19:47:14.373360] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:08:45.861 [2024-07-24 19:47:14.373461] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66728 ] 00:08:45.861 [2024-07-24 19:47:14.514616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.120 [2024-07-24 19:47:14.698704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.052 19:47:15 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:47.052 19:47:15 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@864 -- # return 0 00:08:47.052 19:47:15 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@28 -- # rpc_wait_pid=66744 00:08:47.052 19:47:15 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_options -o 30 -a 16 00:08:47.052 19:47:15 iscsi_tgt.iscsi_tgt_rpc_config -- 
rpc_config/rpc_config.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_wait_init 00:08:47.052 19:47:15 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@32 -- # ps 66744 00:08:47.052 PID TTY STAT TIME COMMAND 00:08:47.052 66744 ? R 0:00 python3 /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_wait_init 00:08:47.052 19:47:15 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:08:47.310 [2024-07-24 19:47:15.944014] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:47.567 19:47:16 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@35 -- # sleep 1 00:08:48.942 iscsi_tgt is listening. Running tests... 00:08:48.942 19:47:17 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@36 -- # echo 'iscsi_tgt is listening. Running tests...' 00:08:48.942 19:47:17 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@39 -- # NOT ps 66744 00:08:48.942 19:47:17 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@650 -- # local es=0 00:08:48.942 19:47:17 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@652 -- # valid_exec_arg ps 66744 00:08:48.942 19:47:17 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@638 -- # local arg=ps 00:08:48.942 19:47:17 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:48.942 19:47:17 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # type -t ps 00:08:48.942 19:47:17 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:48.942 19:47:17 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@644 -- # type -P ps 00:08:48.942 19:47:17 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:48.942 19:47:17 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@644 -- # arg=/usr/bin/ps 
00:08:48.942 19:47:17 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/ps ]] 00:08:48.942 19:47:17 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@653 -- # ps 66744 00:08:48.942 PID TTY STAT TIME COMMAND 00:08:48.942 19:47:17 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@653 -- # es=1 00:08:48.942 19:47:17 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:48.942 19:47:17 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:48.942 19:47:17 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:48.942 19:47:17 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@43 -- # rpc_wait_pid=66769 00:08:48.942 19:47:17 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_wait_init 00:08:48.942 19:47:17 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@44 -- # sleep 1 00:08:49.628 19:47:18 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@45 -- # NOT ps 66769 00:08:49.628 19:47:18 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@650 -- # local es=0 00:08:49.628 19:47:18 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@652 -- # valid_exec_arg ps 66769 00:08:49.628 19:47:18 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@638 -- # local arg=ps 00:08:49.628 19:47:18 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:49.628 19:47:18 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # type -t ps 00:08:49.628 19:47:18 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:49.628 19:47:18 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@644 -- # type -P ps 00:08:49.628 19:47:18 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:08:49.628 19:47:18 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@644 -- # arg=/usr/bin/ps 00:08:49.628 19:47:18 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/ps ]] 00:08:49.628 19:47:18 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@653 -- # ps 66769 00:08:49.895 PID TTY STAT TIME COMMAND 00:08:49.895 19:47:18 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@653 -- # es=1 00:08:49.895 19:47:18 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:49.895 19:47:18 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:49.895 19:47:18 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:49.895 19:47:18 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@47 -- # timing_exit start_iscsi_tgt 00:08:49.895 19:47:18 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:49.895 19:47:18 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@10 -- # set +x 00:08:49.895 19:47:18 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@49 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config/rpc_config.py /home/vagrant/spdk_repo/spdk/scripts/rpc.py 10.0.0.1 10.0.0.2 3260 10.0.0.2/32 spdk_iscsi_ns 00:09:16.464 [2024-07-24 19:47:44.334364] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:18.995 [2024-07-24 19:47:47.361744] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:20.370 verify_log_flag_rpc_methods passed 00:09:20.370 create_malloc_bdevs_rpc_methods passed 00:09:20.370 verify_portal_groups_rpc_methods passed 00:09:20.370 verify_initiator_groups_rpc_method passed. 00:09:20.370 This issue will be fixed later. 00:09:20.370 verify_target_nodes_rpc_methods passed. 
00:09:20.370 verify_scsi_devices_rpc_methods passed 00:09:20.370 verify_iscsi_connection_rpc_methods passed 00:09:20.370 19:47:48 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:09:20.629 [ 00:09:20.629 { 00:09:20.629 "name": "Malloc0", 00:09:20.629 "aliases": [ 00:09:20.629 "49c1d52b-89b2-4511-96fb-de9d65ea6713" 00:09:20.629 ], 00:09:20.629 "product_name": "Malloc disk", 00:09:20.629 "block_size": 512, 00:09:20.629 "num_blocks": 131072, 00:09:20.629 "uuid": "49c1d52b-89b2-4511-96fb-de9d65ea6713", 00:09:20.629 "assigned_rate_limits": { 00:09:20.629 "rw_ios_per_sec": 0, 00:09:20.629 "rw_mbytes_per_sec": 0, 00:09:20.629 "r_mbytes_per_sec": 0, 00:09:20.629 "w_mbytes_per_sec": 0 00:09:20.629 }, 00:09:20.629 "claimed": false, 00:09:20.629 "zoned": false, 00:09:20.629 "supported_io_types": { 00:09:20.629 "read": true, 00:09:20.629 "write": true, 00:09:20.629 "unmap": true, 00:09:20.629 "flush": true, 00:09:20.629 "reset": true, 00:09:20.629 "nvme_admin": false, 00:09:20.629 "nvme_io": false, 00:09:20.629 "nvme_io_md": false, 00:09:20.629 "write_zeroes": true, 00:09:20.629 "zcopy": true, 00:09:20.629 "get_zone_info": false, 00:09:20.629 "zone_management": false, 00:09:20.629 "zone_append": false, 00:09:20.629 "compare": false, 00:09:20.629 "compare_and_write": false, 00:09:20.629 "abort": true, 00:09:20.629 "seek_hole": false, 00:09:20.629 "seek_data": false, 00:09:20.629 "copy": true, 00:09:20.629 "nvme_iov_md": false 00:09:20.629 }, 00:09:20.629 "memory_domains": [ 00:09:20.629 { 00:09:20.629 "dma_device_id": "system", 00:09:20.629 "dma_device_type": 1 00:09:20.629 }, 00:09:20.629 { 00:09:20.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.629 "dma_device_type": 2 00:09:20.629 } 00:09:20.629 ], 00:09:20.629 "driver_specific": {} 00:09:20.629 }, 00:09:20.629 { 00:09:20.629 "name": "Malloc1", 00:09:20.629 "aliases": [ 00:09:20.629 "61d20bac-5cb0-45da-befa-ba6769f45202" 00:09:20.629 ], 
00:09:20.629 "product_name": "Malloc disk", 00:09:20.629 "block_size": 512, 00:09:20.629 "num_blocks": 131072, 00:09:20.629 "uuid": "61d20bac-5cb0-45da-befa-ba6769f45202", 00:09:20.629 "assigned_rate_limits": { 00:09:20.629 "rw_ios_per_sec": 0, 00:09:20.630 "rw_mbytes_per_sec": 0, 00:09:20.630 "r_mbytes_per_sec": 0, 00:09:20.630 "w_mbytes_per_sec": 0 00:09:20.630 }, 00:09:20.630 "claimed": false, 00:09:20.630 "zoned": false, 00:09:20.630 "supported_io_types": { 00:09:20.630 "read": true, 00:09:20.630 "write": true, 00:09:20.630 "unmap": true, 00:09:20.630 "flush": true, 00:09:20.630 "reset": true, 00:09:20.630 "nvme_admin": false, 00:09:20.630 "nvme_io": false, 00:09:20.630 "nvme_io_md": false, 00:09:20.630 "write_zeroes": true, 00:09:20.630 "zcopy": true, 00:09:20.630 "get_zone_info": false, 00:09:20.630 "zone_management": false, 00:09:20.630 "zone_append": false, 00:09:20.630 "compare": false, 00:09:20.630 "compare_and_write": false, 00:09:20.630 "abort": true, 00:09:20.630 "seek_hole": false, 00:09:20.630 "seek_data": false, 00:09:20.630 "copy": true, 00:09:20.630 "nvme_iov_md": false 00:09:20.630 }, 00:09:20.630 "memory_domains": [ 00:09:20.630 { 00:09:20.630 "dma_device_id": "system", 00:09:20.630 "dma_device_type": 1 00:09:20.630 }, 00:09:20.630 { 00:09:20.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.630 "dma_device_type": 2 00:09:20.630 } 00:09:20.630 ], 00:09:20.630 "driver_specific": {} 00:09:20.630 }, 00:09:20.630 { 00:09:20.630 "name": "Malloc2", 00:09:20.630 "aliases": [ 00:09:20.630 "a39277bc-3b50-4818-9572-e818ec71cb57" 00:09:20.630 ], 00:09:20.630 "product_name": "Malloc disk", 00:09:20.630 "block_size": 512, 00:09:20.630 "num_blocks": 131072, 00:09:20.630 "uuid": "a39277bc-3b50-4818-9572-e818ec71cb57", 00:09:20.630 "assigned_rate_limits": { 00:09:20.630 "rw_ios_per_sec": 0, 00:09:20.630 "rw_mbytes_per_sec": 0, 00:09:20.630 "r_mbytes_per_sec": 0, 00:09:20.630 "w_mbytes_per_sec": 0 00:09:20.630 }, 00:09:20.630 "claimed": false, 00:09:20.630 
"zoned": false, 00:09:20.630 "supported_io_types": { 00:09:20.630 "read": true, 00:09:20.630 "write": true, 00:09:20.630 "unmap": true, 00:09:20.630 "flush": true, 00:09:20.630 "reset": true, 00:09:20.630 "nvme_admin": false, 00:09:20.630 "nvme_io": false, 00:09:20.630 "nvme_io_md": false, 00:09:20.630 "write_zeroes": true, 00:09:20.630 "zcopy": true, 00:09:20.630 "get_zone_info": false, 00:09:20.630 "zone_management": false, 00:09:20.630 "zone_append": false, 00:09:20.630 "compare": false, 00:09:20.630 "compare_and_write": false, 00:09:20.630 "abort": true, 00:09:20.630 "seek_hole": false, 00:09:20.630 "seek_data": false, 00:09:20.630 "copy": true, 00:09:20.630 "nvme_iov_md": false 00:09:20.630 }, 00:09:20.630 "memory_domains": [ 00:09:20.630 { 00:09:20.630 "dma_device_id": "system", 00:09:20.630 "dma_device_type": 1 00:09:20.630 }, 00:09:20.630 { 00:09:20.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.630 "dma_device_type": 2 00:09:20.630 } 00:09:20.630 ], 00:09:20.630 "driver_specific": {} 00:09:20.630 }, 00:09:20.630 { 00:09:20.630 "name": "Malloc3", 00:09:20.630 "aliases": [ 00:09:20.630 "bb43055f-5e5e-4852-8bb2-523f290989c6" 00:09:20.630 ], 00:09:20.630 "product_name": "Malloc disk", 00:09:20.630 "block_size": 512, 00:09:20.630 "num_blocks": 131072, 00:09:20.630 "uuid": "bb43055f-5e5e-4852-8bb2-523f290989c6", 00:09:20.630 "assigned_rate_limits": { 00:09:20.630 "rw_ios_per_sec": 0, 00:09:20.630 "rw_mbytes_per_sec": 0, 00:09:20.630 "r_mbytes_per_sec": 0, 00:09:20.630 "w_mbytes_per_sec": 0 00:09:20.630 }, 00:09:20.630 "claimed": false, 00:09:20.630 "zoned": false, 00:09:20.630 "supported_io_types": { 00:09:20.630 "read": true, 00:09:20.630 "write": true, 00:09:20.630 "unmap": true, 00:09:20.630 "flush": true, 00:09:20.630 "reset": true, 00:09:20.630 "nvme_admin": false, 00:09:20.630 "nvme_io": false, 00:09:20.630 "nvme_io_md": false, 00:09:20.630 "write_zeroes": true, 00:09:20.630 "zcopy": true, 00:09:20.630 "get_zone_info": false, 00:09:20.630 
"zone_management": false, 00:09:20.630 "zone_append": false, 00:09:20.630 "compare": false, 00:09:20.630 "compare_and_write": false, 00:09:20.630 "abort": true, 00:09:20.630 "seek_hole": false, 00:09:20.630 "seek_data": false, 00:09:20.630 "copy": true, 00:09:20.630 "nvme_iov_md": false 00:09:20.630 }, 00:09:20.630 "memory_domains": [ 00:09:20.630 { 00:09:20.630 "dma_device_id": "system", 00:09:20.630 "dma_device_type": 1 00:09:20.630 }, 00:09:20.630 { 00:09:20.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.630 "dma_device_type": 2 00:09:20.630 } 00:09:20.630 ], 00:09:20.630 "driver_specific": {} 00:09:20.630 }, 00:09:20.630 { 00:09:20.630 "name": "Malloc4", 00:09:20.630 "aliases": [ 00:09:20.630 "893da86b-bc9a-4469-b4d3-286e4e6c55cc" 00:09:20.630 ], 00:09:20.630 "product_name": "Malloc disk", 00:09:20.630 "block_size": 512, 00:09:20.630 "num_blocks": 131072, 00:09:20.630 "uuid": "893da86b-bc9a-4469-b4d3-286e4e6c55cc", 00:09:20.630 "assigned_rate_limits": { 00:09:20.630 "rw_ios_per_sec": 0, 00:09:20.630 "rw_mbytes_per_sec": 0, 00:09:20.630 "r_mbytes_per_sec": 0, 00:09:20.630 "w_mbytes_per_sec": 0 00:09:20.630 }, 00:09:20.630 "claimed": false, 00:09:20.630 "zoned": false, 00:09:20.630 "supported_io_types": { 00:09:20.630 "read": true, 00:09:20.630 "write": true, 00:09:20.630 "unmap": true, 00:09:20.630 "flush": true, 00:09:20.630 "reset": true, 00:09:20.630 "nvme_admin": false, 00:09:20.630 "nvme_io": false, 00:09:20.630 "nvme_io_md": false, 00:09:20.630 "write_zeroes": true, 00:09:20.630 "zcopy": true, 00:09:20.630 "get_zone_info": false, 00:09:20.630 "zone_management": false, 00:09:20.630 "zone_append": false, 00:09:20.630 "compare": false, 00:09:20.630 "compare_and_write": false, 00:09:20.630 "abort": true, 00:09:20.630 "seek_hole": false, 00:09:20.630 "seek_data": false, 00:09:20.630 "copy": true, 00:09:20.630 "nvme_iov_md": false 00:09:20.630 }, 00:09:20.630 "memory_domains": [ 00:09:20.630 { 00:09:20.630 "dma_device_id": "system", 00:09:20.630 
"dma_device_type": 1 00:09:20.630 }, 00:09:20.630 { 00:09:20.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.630 "dma_device_type": 2 00:09:20.630 } 00:09:20.630 ], 00:09:20.630 "driver_specific": {} 00:09:20.630 }, 00:09:20.630 { 00:09:20.630 "name": "Malloc5", 00:09:20.630 "aliases": [ 00:09:20.630 "51431c26-5ee1-430e-8dfb-e2a4350eb0e8" 00:09:20.630 ], 00:09:20.630 "product_name": "Malloc disk", 00:09:20.630 "block_size": 512, 00:09:20.630 "num_blocks": 131072, 00:09:20.630 "uuid": "51431c26-5ee1-430e-8dfb-e2a4350eb0e8", 00:09:20.630 "assigned_rate_limits": { 00:09:20.630 "rw_ios_per_sec": 0, 00:09:20.630 "rw_mbytes_per_sec": 0, 00:09:20.630 "r_mbytes_per_sec": 0, 00:09:20.630 "w_mbytes_per_sec": 0 00:09:20.630 }, 00:09:20.630 "claimed": false, 00:09:20.630 "zoned": false, 00:09:20.630 "supported_io_types": { 00:09:20.630 "read": true, 00:09:20.630 "write": true, 00:09:20.630 "unmap": true, 00:09:20.630 "flush": true, 00:09:20.630 "reset": true, 00:09:20.630 "nvme_admin": false, 00:09:20.630 "nvme_io": false, 00:09:20.630 "nvme_io_md": false, 00:09:20.630 "write_zeroes": true, 00:09:20.630 "zcopy": true, 00:09:20.630 "get_zone_info": false, 00:09:20.630 "zone_management": false, 00:09:20.630 "zone_append": false, 00:09:20.630 "compare": false, 00:09:20.630 "compare_and_write": false, 00:09:20.630 "abort": true, 00:09:20.630 "seek_hole": false, 00:09:20.630 "seek_data": false, 00:09:20.630 "copy": true, 00:09:20.630 "nvme_iov_md": false 00:09:20.630 }, 00:09:20.630 "memory_domains": [ 00:09:20.630 { 00:09:20.630 "dma_device_id": "system", 00:09:20.630 "dma_device_type": 1 00:09:20.630 }, 00:09:20.630 { 00:09:20.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.630 "dma_device_type": 2 00:09:20.630 } 00:09:20.630 ], 00:09:20.630 "driver_specific": {} 00:09:20.630 } 00:09:20.630 ] 00:09:20.630 19:47:49 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@53 -- # trap - SIGINT SIGTERM EXIT 00:09:20.630 19:47:49 iscsi_tgt.iscsi_tgt_rpc_config -- 
rpc_config/rpc_config.sh@55 -- # iscsicleanup 00:09:20.630 19:47:49 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection' 00:09:20.630 Cleaning up iSCSI connection 00:09:20.631 19:47:49 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout 00:09:20.631 iscsiadm: No matching sessions found 00:09:20.631 19:47:49 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@983 -- # true 00:09:20.631 19:47:49 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete 00:09:20.631 iscsiadm: No records found 00:09:20.631 19:47:49 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@984 -- # true 00:09:20.631 19:47:49 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@985 -- # rm -rf 00:09:20.631 19:47:49 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@56 -- # killprocess 66728 00:09:20.631 19:47:49 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@950 -- # '[' -z 66728 ']' 00:09:20.631 19:47:49 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@954 -- # kill -0 66728 00:09:20.631 19:47:49 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@955 -- # uname 00:09:20.631 19:47:49 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:20.631 19:47:49 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66728 00:09:20.631 19:47:49 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:20.631 19:47:49 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:20.631 killing process with pid 66728 00:09:20.631 19:47:49 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66728' 00:09:20.631 19:47:49 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@969 -- # kill 66728 00:09:20.631 
19:47:49 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@974 -- # wait 66728 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@58 -- # iscsitestfini 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:09:21.566 00:09:21.566 real 0m35.828s 00:09:21.566 user 1m0.627s 00:09:21.566 sys 0m6.004s 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@10 -- # set +x 00:09:21.566 ************************************ 00:09:21.566 END TEST iscsi_tgt_rpc_config 00:09:21.566 ************************************ 00:09:21.566 19:47:50 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@36 -- # run_test iscsi_tgt_iscsi_lvol /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/lvol/iscsi_lvol.sh 00:09:21.566 19:47:50 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:21.566 19:47:50 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:21.566 19:47:50 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:09:21.566 ************************************ 00:09:21.566 START TEST iscsi_tgt_iscsi_lvol 00:09:21.566 ************************************ 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/lvol/iscsi_lvol.sh 00:09:21.566 * Looking for test storage... 
00:09:21.566 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/lvol 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:09:21.566 19:47:50 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@11 -- # iscsitestinit 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@13 -- # MALLOC_BDEV_SIZE=128 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@15 -- # '[' 0 -eq 1 ']' 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@19 -- # NUM_LVS=2 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@20 -- # NUM_LVOL=2 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@23 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@24 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@26 -- # timing_enter start_iscsi_tgt 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@29 -- # pid=67347 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@30 -- # echo 'Process pid: 67347' 00:09:21.566 Process pid: 67347 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@32 -- # trap 'iscsicleanup; killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM 
EXIT 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@28 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --wait-for-rpc 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@34 -- # waitforlisten 67347 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@831 -- # '[' -z 67347 ']' 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:21.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:21.566 19:47:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:21.825 [2024-07-24 19:47:50.235643] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:09:21.825 [2024-07-24 19:47:50.235737] [ DPDK EAL parameters: iscsi --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67347 ] 00:09:21.825 [2024-07-24 19:47:50.377514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:22.135 [2024-07-24 19:47:50.540566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.135 [2024-07-24 19:47:50.540637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:22.135 [2024-07-24 19:47:50.540708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:22.135 [2024-07-24 19:47:50.540719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.707 19:47:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:22.707 19:47:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@864 -- # return 0 00:09:22.707 19:47:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_options -o 30 -a 16 00:09:22.965 19:47:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:09:23.533 [2024-07-24 19:47:51.980628] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:23.792 iscsi_tgt is listening. Running tests... 00:09:23.792 19:47:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@37 -- # echo 'iscsi_tgt is listening. Running tests...' 
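In the lvol setup that follows, the script accumulates a space-separated `bdev_name:lun_id` string per logical volume (the `LUNs+='<uuid>:0 '` lines) and passes it to `iscsi_create_target_node`. A hypothetical sketch of that accumulation in Python terms; the helper name is illustrative, not part of SPDK:

```python
def build_lun_map(bdev_names):
    """Mimic the shell's LUNs+='name:lun ' accumulation used for iscsi_create_target_node.

    LUN ids are assigned in order starting at 0; a trailing space is kept to
    match the shell string the test script builds.
    """
    return "".join(f"{name}:{lun} " for lun, name in enumerate(bdev_names))

print(build_lun_map(["680c7c8a-d433-445f-907c-2feaef4da0f3",
                     "6239364e-f0ce-4bdd-9c10-eee823d1bdb1"]))
```

With the two lvol UUIDs from the log this yields exactly the argument string visible in the `iscsi_create_target_node Target1 ...` call.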
00:09:23.792 19:47:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@39 -- # timing_exit start_iscsi_tgt 00:09:23.792 19:47:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:23.792 19:47:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:23.792 19:47:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@41 -- # timing_enter setup 00:09:23.792 19:47:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:23.792 19:47:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:23.792 19:47:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:09:24.051 19:47:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # seq 1 2 00:09:24.051 19:47:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:09:24.051 19:47:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=3 00:09:24.051 19:47:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 3 ANY 10.0.0.2/32 00:09:24.310 19:47:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 1 -eq 1 ']' 00:09:24.310 19:47:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:09:24.876 19:47:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@50 -- # malloc_bdevs='Malloc0 ' 00:09:24.876 19:47:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:09:25.135 19:47:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@51 -- # malloc_bdevs+=Malloc1 00:09:25.135 19:47:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@52 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:25.393 19:47:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@53 -- # bdev=raid0 00:09:25.393 19:47:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs_1 -c 1048576 00:09:25.651 19:47:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=7badc2f4-1f11-420b-841d-aef359ffd749 00:09:25.651 19:47:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:09:25.651 19:47:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 2 00:09:25.651 19:47:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:09:25.651 19:47:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7badc2f4-1f11-420b-841d-aef359ffd749 lbd_1 10 00:09:25.909 19:47:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=680c7c8a-d433-445f-907c-2feaef4da0f3 00:09:25.909 19:47:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='680c7c8a-d433-445f-907c-2feaef4da0f3:0 ' 00:09:25.909 19:47:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:09:25.909 19:47:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7badc2f4-1f11-420b-841d-aef359ffd749 lbd_2 10 00:09:26.167 19:47:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=6239364e-f0ce-4bdd-9c10-eee823d1bdb1 00:09:26.167 19:47:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='6239364e-f0ce-4bdd-9c10-eee823d1bdb1:1 ' 00:09:26.167 19:47:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target1 Target1_alias 
'680c7c8a-d433-445f-907c-2feaef4da0f3:0 6239364e-f0ce-4bdd-9c10-eee823d1bdb1:1 ' 1:3 256 -d 00:09:26.426 19:47:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:09:26.426 19:47:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=4 00:09:26.426 19:47:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 4 ANY 10.0.0.2/32 00:09:26.684 19:47:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 2 -eq 1 ']' 00:09:26.684 19:47:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:09:26.942 19:47:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc2 00:09:26.942 19:47:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc2 lvs_2 -c 1048576 00:09:27.200 19:47:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=2c0778d8-8f3f-42c6-91c1-053448c0a795 00:09:27.200 19:47:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:09:27.200 19:47:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 2 00:09:27.200 19:47:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:09:27.200 19:47:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2c0778d8-8f3f-42c6-91c1-053448c0a795 lbd_1 10 00:09:27.457 19:47:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=806f7775-3f91-433f-b686-ceea38621cac 00:09:27.457 19:47:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='806f7775-3f91-433f-b686-ceea38621cac:0 ' 00:09:27.457 19:47:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:09:27.457 
19:47:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2c0778d8-8f3f-42c6-91c1-053448c0a795 lbd_2 10 00:09:27.715 19:47:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=e44b7b25-1e14-43aa-b7d2-4aae27c44786 00:09:27.715 19:47:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='e44b7b25-1e14-43aa-b7d2-4aae27c44786:1 ' 00:09:27.715 19:47:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target2 Target2_alias '806f7775-3f91-433f-b686-ceea38621cac:0 e44b7b25-1e14-43aa-b7d2-4aae27c44786:1 ' 1:4 256 -d 00:09:27.974 19:47:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@66 -- # timing_exit setup 00:09:27.974 19:47:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:27.974 19:47:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:27.974 19:47:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@68 -- # sleep 1 00:09:28.909 19:47:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@70 -- # timing_enter discovery 00:09:28.909 19:47:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:28.909 19:47:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:28.909 19:47:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@71 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:09:28.910 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:09:28.910 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target2 00:09:28.910 19:47:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@72 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:09:29.168 [2024-07-24 19:47:57.637535] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:29.168 [2024-07-24 19:47:57.639563] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: 
unsupported INQUIRY VPD page 0xb9 00:09:29.168 [2024-07-24 19:47:57.642674] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:29.168 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:09:29.168 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:09:29.168 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:09:29.168 Login to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:09:29.168 19:47:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@73 -- # waitforiscsidevices 4 00:09:29.168 19:47:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@116 -- # local num=4 00:09:29.168 19:47:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:09:29.168 19:47:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:09:29.168 [2024-07-24 19:47:57.675453] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:29.168 19:47:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:09:29.168 19:47:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:09:29.168 19:47:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@119 -- # n=4 00:09:29.168 19:47:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@120 -- # '[' 4 -ne 4 ']' 00:09:29.168 19:47:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@123 -- # return 0 00:09:29.168 19:47:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@74 -- # timing_exit discovery 00:09:29.168 19:47:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:29.168 19:47:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:29.168 19:47:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@76 -- # timing_enter fio 00:09:29.168 19:47:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:29.168 19:47:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:29.168 19:47:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 8 -t randwrite -r 10 -v 00:09:29.168 [global] 00:09:29.168 thread=1 00:09:29.168 invalidate=1 00:09:29.168 rw=randwrite 00:09:29.168 time_based=1 00:09:29.168 runtime=10 00:09:29.168 ioengine=libaio 00:09:29.168 direct=1 00:09:29.168 bs=131072 00:09:29.168 iodepth=8 00:09:29.168 norandommap=0 00:09:29.168 numjobs=1 00:09:29.168 00:09:29.168 verify_dump=1 00:09:29.168 verify_backlog=512 00:09:29.168 verify_state_save=0 00:09:29.168 do_verify=1 00:09:29.168 verify=crc32c-intel 00:09:29.168 [job0] 00:09:29.168 filename=/dev/sdb 00:09:29.168 [job1] 00:09:29.168 filename=/dev/sdd 00:09:29.168 [job2] 00:09:29.168 filename=/dev/sda 00:09:29.168 [job3] 00:09:29.168 filename=/dev/sdc 00:09:29.427 queue_depth set to 113 (sdb) 00:09:29.427 queue_depth set to 113 (sdd) 00:09:29.427 queue_depth set to 113 (sda) 00:09:29.427 queue_depth set to 113 (sdc) 00:09:29.427 job0: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:09:29.427 job1: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:09:29.427 job2: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:09:29.427 job3: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:09:29.427 fio-3.35 00:09:29.427 Starting 4 threads 00:09:29.427 [2024-07-24 19:47:58.063823] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:29.427 [2024-07-24 19:47:58.068118] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:39.611 00:09:39.611 job0: (groupid=0, jobs=1): err= 0: pid=67629: Wed Jul 24 19:48:08 2024 00:09:39.611 read: IOPS=842, BW=105MiB/s (110MB/s)(1030MiB/9780msec) 00:09:39.611 slat (usec): min=6, max=2214, avg=18.92, stdev=49.35 00:09:39.611 clat (usec): min=395, max=13259, avg=3420.78, stdev=1190.58 00:09:39.611 lat (usec): min=407, max=13276, avg=3439.71, stdev=1188.12 00:09:39.611 clat percentiles (usec): 00:09:39.611 | 1.00th=[ 1221], 5.00th=[ 1762], 10.00th=[ 2180], 20.00th=[ 2540], 00:09:39.611 | 30.00th=[ 2769], 40.00th=[ 2999], 50.00th=[ 3228], 60.00th=[ 3490], 00:09:39.611 | 70.00th=[ 3785], 80.00th=[ 4228], 90.00th=[ 4948], 95.00th=[ 5604], 00:09:39.611 | 99.00th=[ 7308], 99.50th=[ 7832], 99.90th=[ 9503], 99.95th=[ 9765], 00:09:39.611 | 99.99th=[13304] 00:09:39.611 write: IOPS=1307, BW=163MiB/s (171MB/s)(1040MiB/6362msec); 0 zone resets 00:09:39.611 slat (usec): min=32, max=26586, avg=98.15, stdev=483.60 00:09:39.611 clat (usec): min=666, max=36152, avg=5939.93, stdev=2462.52 00:09:39.611 lat (usec): min=734, max=36224, avg=6038.08, stdev=2499.50
00:09:39.611 clat percentiles (usec): 00:09:39.611 | 1.00th=[ 2147], 5.00th=[ 3163], 10.00th=[ 3720], 20.00th=[ 4359], 00:09:39.611 | 30.00th=[ 4817], 40.00th=[ 5145], 50.00th=[ 5604], 60.00th=[ 5932], 00:09:39.611 | 70.00th=[ 6325], 80.00th=[ 6915], 90.00th=[ 8455], 95.00th=[10421], 00:09:39.611 | 99.00th=[14091], 99.50th=[16057], 99.90th=[27657], 99.95th=[27919], 00:09:39.611 | 99.99th=[35914] 00:09:39.611 bw ( KiB/s): min=72960, max=122880, per=16.01%, avg=106337.63, stdev=14461.46, samples=19 00:09:39.611 iops : min= 570, max= 960, avg=830.74, stdev=112.98, samples=19 00:09:39.611 lat (usec) : 500=0.02%, 750=0.04%, 1000=0.14% 00:09:39.611 lat (msec) : 2=3.85%, 4=40.76%, 10=51.91%, 20=3.09%, 50=0.18% 00:09:39.611 cpu : usr=7.34%, sys=3.16%, ctx=12975, majf=0, minf=1 00:09:39.611 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:39.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.611 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.611 issued rwts: total=8240,8320,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.611 latency : target=0, window=0, percentile=100.00%, depth=8 00:09:39.611 job1: (groupid=0, jobs=1): err= 0: pid=67630: Wed Jul 24 19:48:08 2024 00:09:39.611 read: IOPS=834, BW=104MiB/s (109MB/s)(1020MiB/9779msec) 00:09:39.611 slat (usec): min=5, max=3087, avg=20.77, stdev=69.10 00:09:39.611 clat (usec): min=238, max=16384, avg=3582.95, stdev=1380.12 00:09:39.611 lat (usec): min=401, max=16394, avg=3603.72, stdev=1377.20 00:09:39.611 clat percentiles (usec): 00:09:39.611 | 1.00th=[ 1188], 5.00th=[ 1795], 10.00th=[ 2278], 20.00th=[ 2638], 00:09:39.611 | 30.00th=[ 2868], 40.00th=[ 3097], 50.00th=[ 3326], 60.00th=[ 3621], 00:09:39.611 | 70.00th=[ 3982], 80.00th=[ 4424], 90.00th=[ 5145], 95.00th=[ 5997], 00:09:39.611 | 99.00th=[ 8455], 99.50th=[ 9110], 99.90th=[13829], 99.95th=[15926], 00:09:39.611 | 99.99th=[16450] 00:09:39.611 write: IOPS=1330, BW=166MiB/s 
(174MB/s)(1036MiB/6230msec); 0 zone resets 00:09:39.611 slat (usec): min=29, max=9367, avg=90.39, stdev=271.08 00:09:39.611 clat (usec): min=582, max=27545, avg=5812.93, stdev=2247.32 00:09:39.611 lat (usec): min=704, max=27685, avg=5903.33, stdev=2250.06 00:09:39.611 clat percentiles (usec): 00:09:39.611 | 1.00th=[ 2180], 5.00th=[ 3097], 10.00th=[ 3621], 20.00th=[ 4228], 00:09:39.611 | 30.00th=[ 4686], 40.00th=[ 5080], 50.00th=[ 5538], 60.00th=[ 5932], 00:09:39.611 | 70.00th=[ 6259], 80.00th=[ 6849], 90.00th=[ 8291], 95.00th=[10290], 00:09:39.611 | 99.00th=[13829], 99.50th=[14484], 99.90th=[27132], 99.95th=[27395], 00:09:39.611 | 99.99th=[27657] 00:09:39.611 bw ( KiB/s): min=81920, max=122880, per=15.90%, avg=105612.89, stdev=13786.13, samples=19 00:09:39.611 iops : min= 640, max= 960, avg=825.05, stdev=107.73, samples=19 00:09:39.611 lat (usec) : 250=0.01%, 500=0.02%, 750=0.05%, 1000=0.12% 00:09:39.611 lat (msec) : 2=3.57%, 4=39.57%, 10=53.68%, 20=2.88%, 50=0.10% 00:09:39.611 cpu : usr=7.11%, sys=3.47%, ctx=12945, majf=0, minf=1 00:09:39.611 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:39.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.611 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.611 issued rwts: total=8160,8290,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.611 latency : target=0, window=0, percentile=100.00%, depth=8 00:09:39.611 job2: (groupid=0, jobs=1): err= 0: pid=67635: Wed Jul 24 19:48:08 2024 00:09:39.611 read: IOPS=830, BW=104MiB/s (109MB/s)(1019MiB/9816msec) 00:09:39.611 slat (usec): min=6, max=3930, avg=21.73, stdev=105.68 00:09:39.611 clat (usec): min=299, max=17262, avg=3638.01, stdev=1421.59 00:09:39.611 lat (usec): min=407, max=17272, avg=3659.75, stdev=1421.44 00:09:39.611 clat percentiles (usec): 00:09:39.611 | 1.00th=[ 1287], 5.00th=[ 1729], 10.00th=[ 2245], 20.00th=[ 2704], 00:09:39.611 | 30.00th=[ 2966], 40.00th=[ 3195], 50.00th=[ 
3392], 60.00th=[ 3621], 00:09:39.611 | 70.00th=[ 3949], 80.00th=[ 4424], 90.00th=[ 5276], 95.00th=[ 6259], 00:09:39.611 | 99.00th=[ 8356], 99.50th=[ 9241], 99.90th=[13829], 99.95th=[17171], 00:09:39.611 | 99.99th=[17171] 00:09:39.611 write: IOPS=1322, BW=165MiB/s (173MB/s)(1020MiB/6170msec); 0 zone resets 00:09:39.611 slat (usec): min=32, max=7183, avg=88.20, stdev=252.26 00:09:39.611 clat (usec): min=1367, max=27710, avg=5857.32, stdev=2027.35 00:09:39.611 lat (usec): min=1476, max=27749, avg=5945.52, stdev=2036.61 00:09:39.611 clat percentiles (usec): 00:09:39.611 | 1.00th=[ 2474], 5.00th=[ 3392], 10.00th=[ 3818], 20.00th=[ 4424], 00:09:39.611 | 30.00th=[ 4817], 40.00th=[ 5211], 50.00th=[ 5669], 60.00th=[ 5997], 00:09:39.611 | 70.00th=[ 6325], 80.00th=[ 6915], 90.00th=[ 8094], 95.00th=[ 9503], 00:09:39.611 | 99.00th=[11994], 99.50th=[14484], 99.90th=[24249], 99.95th=[27657], 00:09:39.611 | 99.99th=[27657] 00:09:39.611 bw ( KiB/s): min=81920, max=122880, per=15.75%, avg=104648.95, stdev=12516.21, samples=19 00:09:39.611 iops : min= 640, max= 960, avg=817.53, stdev=97.74, samples=19 00:09:39.611 lat (usec) : 500=0.04%, 750=0.02%, 1000=0.08% 00:09:39.611 lat (msec) : 2=3.78%, 4=37.93%, 10=55.95%, 20=2.13%, 50=0.08% 00:09:39.611 cpu : usr=7.22%, sys=3.06%, ctx=13089, majf=0, minf=1 00:09:39.611 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:39.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.611 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.611 issued rwts: total=8151,8160,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.611 latency : target=0, window=0, percentile=100.00%, depth=8 00:09:39.611 job3: (groupid=0, jobs=1): err= 0: pid=67642: Wed Jul 24 19:48:08 2024 00:09:39.611 read: IOPS=830, BW=104MiB/s (109MB/s)(1020MiB/9820msec) 00:09:39.611 slat (usec): min=5, max=2204, avg=19.07, stdev=55.35 00:09:39.611 clat (usec): min=257, max=14745, avg=3654.93, stdev=1288.04 
00:09:39.611 lat (usec): min=630, max=14756, avg=3674.01, stdev=1284.38 00:09:39.611 clat percentiles (usec): 00:09:39.611 | 1.00th=[ 1319], 5.00th=[ 1909], 10.00th=[ 2343], 20.00th=[ 2737], 00:09:39.611 | 30.00th=[ 3032], 40.00th=[ 3261], 50.00th=[ 3490], 60.00th=[ 3720], 00:09:39.611 | 70.00th=[ 4015], 80.00th=[ 4424], 90.00th=[ 5145], 95.00th=[ 5932], 00:09:39.611 | 99.00th=[ 7963], 99.50th=[ 8586], 99.90th=[11207], 99.95th=[14746], 00:09:39.611 | 99.99th=[14746] 00:09:39.611 write: IOPS=1341, BW=168MiB/s (176MB/s)(1031MiB/6152msec); 0 zone resets 00:09:39.611 slat (usec): min=28, max=5347, avg=89.63, stdev=231.00 00:09:39.611 clat (usec): min=658, max=27722, avg=5791.31, stdev=2168.89 00:09:39.611 lat (usec): min=722, max=27761, avg=5880.94, stdev=2171.45 00:09:39.611 clat percentiles (usec): 00:09:39.611 | 1.00th=[ 2147], 5.00th=[ 3130], 10.00th=[ 3556], 20.00th=[ 4228], 00:09:39.611 | 30.00th=[ 4752], 40.00th=[ 5145], 50.00th=[ 5538], 60.00th=[ 5932], 00:09:39.611 | 70.00th=[ 6259], 80.00th=[ 6849], 90.00th=[ 8160], 95.00th=[10028], 00:09:39.611 | 99.00th=[12125], 99.50th=[14353], 99.90th=[25560], 99.95th=[27395], 00:09:39.611 | 99.99th=[27657] 00:09:39.611 bw ( KiB/s): min=81920, max=122880, per=15.90%, avg=105614.05, stdev=11140.96, samples=19 00:09:39.611 iops : min= 640, max= 960, avg=825.05, stdev=87.11, samples=19 00:09:39.611 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.10% 00:09:39.612 lat (msec) : 2=3.07%, 4=39.74%, 10=54.34%, 20=2.58%, 50=0.11% 00:09:39.612 cpu : usr=7.31%, sys=3.18%, ctx=12845, majf=0, minf=1 00:09:39.612 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:39.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.612 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.612 issued rwts: total=8160,8250,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.612 latency : target=0, window=0, percentile=100.00%, depth=8 00:09:39.612 00:09:39.612 Run status group 0 
(all jobs): 00:09:39.612 READ: bw=416MiB/s (437MB/s), 104MiB/s-105MiB/s (109MB/s-110MB/s), io=4089MiB (4287MB), run=9779-9820msec 00:09:39.612 WRITE: bw=649MiB/s (680MB/s), 163MiB/s-168MiB/s (171MB/s-176MB/s), io=4128MiB (4328MB), run=6152-6362msec 00:09:39.612 00:09:39.612 Disk stats (read/write): 00:09:39.612 sdb: ios=9652/8207, merge=0/0, ticks=29884/45510, in_queue=75395, util=97.34% 00:09:39.612 sdd: ios=9661/8160, merge=0/0, ticks=31258/44326, in_queue=75584, util=97.51% 00:09:39.612 sda: ios=9508/8047, merge=0/0, ticks=31167/44600, in_queue=75767, util=97.01% 00:09:39.612 sdc: ios=9442/8160, merge=0/0, ticks=31592/44490, in_queue=76083, util=97.70% 00:09:39.612 19:48:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@78 -- # timing_exit fio 00:09:39.612 19:48:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:39.612 19:48:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:39.871 19:48:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@80 -- # rm -f ./local-job0-0-verify.state 00:09:39.871 19:48:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:09:39.871 19:48:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@83 -- # rm -f 00:09:39.871 19:48:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@84 -- # iscsicleanup 00:09:39.871 Cleaning up iSCSI connection 00:09:39.871 19:48:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection' 00:09:39.871 19:48:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout 00:09:39.871 Logging out of session [sid: 10, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:09:39.871 Logging out of session [sid: 9, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:09:39.871 Logout of [sid: 10, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 
00:09:39.871 Logout of [sid: 9, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:09:39.871 19:48:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete 00:09:39.871 19:48:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@985 -- # rm -rf 00:09:39.871 19:48:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@85 -- # killprocess 67347 00:09:39.871 19:48:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@950 -- # '[' -z 67347 ']' 00:09:39.871 19:48:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@954 -- # kill -0 67347 00:09:39.871 19:48:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@955 -- # uname 00:09:39.871 19:48:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:39.871 19:48:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67347 00:09:39.871 19:48:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:39.871 19:48:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:39.871 19:48:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67347' 00:09:39.871 killing process with pid 67347 00:09:39.871 19:48:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@969 -- # kill 67347 00:09:39.871 19:48:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@974 -- # wait 67347 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@86 -- # iscsitestfini 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:09:40.809 00:09:40.809 real 0m19.167s 00:09:40.809 user 1m12.565s 00:09:40.809 sys 0m8.640s 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:40.809 19:48:09 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:40.809 ************************************ 00:09:40.809 END TEST iscsi_tgt_iscsi_lvol 00:09:40.809 ************************************ 00:09:40.809 19:48:09 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@37 -- # run_test iscsi_tgt_fio /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/fio/fio.sh 00:09:40.809 19:48:09 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:40.809 19:48:09 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:40.809 19:48:09 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:09:40.809 ************************************ 00:09:40.809 START TEST iscsi_tgt_fio 00:09:40.809 ************************************ 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/fio/fio.sh 00:09:40.809 * Looking for test storage... 00:09:40.809 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/fio 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@16 -- # 
TARGET_BRIDGE=tgt_br 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@11 -- # iscsitestinit 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@48 -- # '[' -z 10.0.0.1 ']' 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@53 -- # '[' -z 10.0.0.2 ']' 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@58 -- # MALLOC_BDEV_SIZE=64 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@59 -- # MALLOC_BLOCK_SIZE=4096 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@60 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@61 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@63 -- # timing_enter start_iscsi_tgt 00:09:40.809 
19:48:09 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@10 -- # set +x 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@66 -- # pid=68765 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@65 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@67 -- # echo 'Process pid: 68765' 00:09:40.809 Process pid: 68765 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@69 -- # trap 'killprocess $pid; exit 1' SIGINT SIGTERM EXIT 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@71 -- # waitforlisten 68765 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@831 -- # '[' -z 68765 ']' 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:40.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:40.809 19:48:09 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@10 -- # set +x 00:09:41.069 [2024-07-24 19:48:09.486745] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:09:41.069 [2024-07-24 19:48:09.486850] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68765 ] 00:09:41.069 [2024-07-24 19:48:09.629525] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.379 [2024-07-24 19:48:09.785448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.983 19:48:10 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:41.983 19:48:10 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@864 -- # return 0 00:09:41.983 19:48:10 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:09:42.243 [2024-07-24 19:48:10.774391] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:42.501 iscsi_tgt is listening. Running tests... 00:09:42.501 19:48:11 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@75 -- # echo 'iscsi_tgt is listening. Running tests...'
00:09:42.501 19:48:11 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@77 -- # timing_exit start_iscsi_tgt 00:09:42.501 19:48:11 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:42.501 19:48:11 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@10 -- # set +x 00:09:42.501 19:48:11 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:09:42.759 19:48:11 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:09:43.018 19:48:11 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 4096 00:09:43.277 19:48:11 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@82 -- # malloc_bdevs='Malloc0 ' 00:09:43.277 19:48:11 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 4096 00:09:43.536 19:48:12 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@83 -- # malloc_bdevs+=Malloc1 00:09:43.536 19:48:12 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:43.795 19:48:12 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 1024 512 00:09:44.392 19:48:13 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@85 -- # bdev=Malloc2 00:09:44.393 19:48:13 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias 'raid0:0 Malloc2:1' 1:2 64 -d 00:09:44.650 19:48:13 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@91 -- # sleep 1 00:09:46.022 19:48:14 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@93 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:09:46.022 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:09:46.022 19:48:14 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@94 -- # iscsiadm -m node --login -p 
10.0.0.1:3260 00:09:46.022 [2024-07-24 19:48:14.344742] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:46.022 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:09:46.022 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:09:46.022 19:48:14 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@95 -- # waitforiscsidevices 2 00:09:46.022 19:48:14 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@116 -- # local num=2 00:09:46.022 19:48:14 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:09:46.022 19:48:14 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:09:46.022 19:48:14 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:09:46.022 19:48:14 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:09:46.022 [2024-07-24 19:48:14.361621] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:46.022 19:48:14 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@119 -- # n=2 00:09:46.022 19:48:14 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@120 -- # '[' 2 -ne 2 ']' 00:09:46.022 19:48:14 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@123 -- # return 0 00:09:46.022 19:48:14 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@97 -- # trap 'iscsicleanup; killprocess $pid; iscsitestfini; delete_tmp_files; exit 1' SIGINT SIGTERM EXIT 00:09:46.022 19:48:14 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 4096 -d 1 -t randrw -r 1 -v 00:09:46.022 [global] 00:09:46.022 thread=1 00:09:46.022 invalidate=1 00:09:46.022 rw=randrw 00:09:46.022 time_based=1 00:09:46.022 runtime=1 00:09:46.022 ioengine=libaio 00:09:46.022 direct=1 00:09:46.022 bs=4096 00:09:46.022 iodepth=1 00:09:46.022 norandommap=0 00:09:46.022 numjobs=1 00:09:46.022 00:09:46.022 verify_dump=1 00:09:46.022 verify_backlog=512 
00:09:46.022 verify_state_save=0 00:09:46.022 do_verify=1 00:09:46.023 verify=crc32c-intel 00:09:46.023 [job0] 00:09:46.023 filename=/dev/sda 00:09:46.023 [job1] 00:09:46.023 filename=/dev/sdb 00:09:46.023 queue_depth set to 113 (sda) 00:09:46.023 queue_depth set to 113 (sdb) 00:09:46.023 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:46.023 job1: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:46.023 fio-3.35 00:09:46.023 Starting 2 threads 00:09:46.023 [2024-07-24 19:48:14.603898] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:46.023 [2024-07-24 19:48:14.607821] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:47.398 [2024-07-24 19:48:15.721872] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:47.398 [2024-07-24 19:48:15.725318] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:47.398 00:09:47.398 job0: (groupid=0, jobs=1): err= 0: pid=68911: Wed Jul 24 19:48:15 2024 00:09:47.398 read: IOPS=6219, BW=24.3MiB/s (25.5MB/s)(24.3MiB/1001msec) 00:09:47.398 slat (nsec): min=3243, max=71051, avg=7749.65, stdev=2799.75 00:09:47.398 clat (usec): min=66, max=3295, avg=95.62, stdev=43.05 00:09:47.398 lat (usec): min=76, max=3328, avg=103.37, stdev=43.71 00:09:47.398 clat percentiles (usec): 00:09:47.398 | 1.00th=[ 80], 5.00th=[ 84], 10.00th=[ 86], 20.00th=[ 87], 00:09:47.398 | 30.00th=[ 89], 40.00th=[ 91], 50.00th=[ 93], 60.00th=[ 95], 00:09:47.398 | 70.00th=[ 97], 80.00th=[ 101], 90.00th=[ 109], 95.00th=[ 116], 00:09:47.398 | 99.00th=[ 137], 99.50th=[ 145], 99.90th=[ 262], 99.95th=[ 400], 00:09:47.398 | 99.99th=[ 3294] 00:09:47.398 bw ( KiB/s): min=12688, max=12688, per=25.33%, avg=12688.00, stdev= 0.00, samples=1 00:09:47.398 iops : min= 3172, max= 3172, avg=3172.00, stdev= 0.00, samples=1 00:09:47.398 
write: IOPS=3286, BW=12.8MiB/s (13.5MB/s)(12.9MiB/1001msec); 0 zone resets 00:09:47.398 slat (nsec): min=4252, max=39138, avg=9068.79, stdev=3119.18 00:09:47.398 clat (usec): min=66, max=1564, avg=96.12, stdev=29.54 00:09:47.398 lat (usec): min=74, max=1573, avg=105.19, stdev=30.02 00:09:47.398 clat percentiles (usec): 00:09:47.398 | 1.00th=[ 76], 5.00th=[ 82], 10.00th=[ 85], 20.00th=[ 88], 00:09:47.398 | 30.00th=[ 90], 40.00th=[ 92], 50.00th=[ 94], 60.00th=[ 96], 00:09:47.398 | 70.00th=[ 99], 80.00th=[ 102], 90.00th=[ 110], 95.00th=[ 117], 00:09:47.398 | 99.00th=[ 139], 99.50th=[ 145], 99.90th=[ 273], 99.95th=[ 490], 00:09:47.398 | 99.99th=[ 1565] 00:09:47.398 bw ( KiB/s): min=13184, max=13184, per=49.29%, avg=13184.00, stdev= 0.00, samples=1 00:09:47.398 iops : min= 3296, max= 3296, avg=3296.00, stdev= 0.00, samples=1 00:09:47.398 lat (usec) : 100=76.54%, 250=23.30%, 500=0.13%, 750=0.01% 00:09:47.398 lat (msec) : 2=0.01%, 4=0.01% 00:09:47.398 cpu : usr=4.80%, sys=10.20%, ctx=9516, majf=0, minf=5 00:09:47.398 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:47.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.398 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.398 issued rwts: total=6226,3290,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.398 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:47.398 job1: (groupid=0, jobs=1): err= 0: pid=68912: Wed Jul 24 19:48:15 2024 00:09:47.398 read: IOPS=6300, BW=24.6MiB/s (25.8MB/s)(24.6MiB/1001msec) 00:09:47.398 slat (nsec): min=3288, max=56689, avg=6540.99, stdev=2354.26 00:09:47.398 clat (usec): min=46, max=483, avg=93.41, stdev=14.00 00:09:47.398 lat (usec): min=54, max=490, avg=99.95, stdev=14.40 00:09:47.398 clat percentiles (usec): 00:09:47.398 | 1.00th=[ 57], 5.00th=[ 82], 10.00th=[ 86], 20.00th=[ 88], 00:09:47.398 | 30.00th=[ 89], 40.00th=[ 91], 50.00th=[ 93], 60.00th=[ 94], 00:09:47.398 | 70.00th=[ 96], 
80.00th=[ 99], 90.00th=[ 105], 95.00th=[ 112], 00:09:47.398 | 99.00th=[ 129], 99.50th=[ 139], 99.90th=[ 172], 99.95th=[ 412], 00:09:47.398 | 99.99th=[ 486] 00:09:47.398 bw ( KiB/s): min=12552, max=12552, per=25.06%, avg=12552.00, stdev= 0.00, samples=1 00:09:47.399 iops : min= 3138, max= 3138, avg=3138.00, stdev= 0.00, samples=1 00:09:47.399 write: IOPS=3399, BW=13.3MiB/s (13.9MB/s)(13.3MiB/1001msec); 0 zone resets 00:09:47.399 slat (nsec): min=4062, max=38251, avg=7625.37, stdev=2774.77 00:09:47.399 clat (usec): min=55, max=781, avg=98.33, stdev=17.48 00:09:47.399 lat (usec): min=61, max=793, avg=105.95, stdev=17.92 00:09:47.399 clat percentiles (usec): 00:09:47.399 | 1.00th=[ 64], 5.00th=[ 85], 10.00th=[ 88], 20.00th=[ 90], 00:09:47.399 | 30.00th=[ 92], 40.00th=[ 95], 50.00th=[ 97], 60.00th=[ 99], 00:09:47.399 | 70.00th=[ 102], 80.00th=[ 105], 90.00th=[ 113], 95.00th=[ 120], 00:09:47.399 | 99.00th=[ 137], 99.50th=[ 145], 99.90th=[ 277], 99.95th=[ 289], 00:09:47.399 | 99.99th=[ 783] 00:09:47.399 bw ( KiB/s): min=13376, max=13376, per=50.01%, avg=13376.00, stdev= 0.00, samples=1 00:09:47.399 iops : min= 3344, max= 3344, avg=3344.00, stdev= 0.00, samples=1 00:09:47.399 lat (usec) : 50=0.02%, 100=76.73%, 250=23.16%, 500=0.08%, 1000=0.01% 00:09:47.399 cpu : usr=3.20%, sys=10.10%, ctx=9710, majf=0, minf=9 00:09:47.399 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:47.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.399 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.399 issued rwts: total=6307,3403,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.399 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:47.399 00:09:47.399 Run status group 0 (all jobs): 00:09:47.399 READ: bw=48.9MiB/s (51.3MB/s), 24.3MiB/s-24.6MiB/s (25.5MB/s-25.8MB/s), io=49.0MiB (51.3MB), run=1001-1001msec 00:09:47.399 WRITE: bw=26.1MiB/s (27.4MB/s), 12.8MiB/s-13.3MiB/s (13.5MB/s-13.9MB/s), 
io=26.1MiB (27.4MB), run=1001-1001msec 00:09:47.399 00:09:47.399 Disk stats (read/write): 00:09:47.399 sda: ios=5445/2937, merge=0/0, ticks=517/274, in_queue=792, util=90.05% 00:09:47.399 sdb: ios=5514/3058, merge=0/0, ticks=510/294, in_queue=804, util=90.72% 00:09:47.399 19:48:15 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 32 -t randrw -r 1 -v 00:09:47.399 [global] 00:09:47.399 thread=1 00:09:47.399 invalidate=1 00:09:47.399 rw=randrw 00:09:47.399 time_based=1 00:09:47.399 runtime=1 00:09:47.399 ioengine=libaio 00:09:47.399 direct=1 00:09:47.399 bs=131072 00:09:47.399 iodepth=32 00:09:47.399 norandommap=0 00:09:47.399 numjobs=1 00:09:47.399 00:09:47.399 verify_dump=1 00:09:47.399 verify_backlog=512 00:09:47.399 verify_state_save=0 00:09:47.399 do_verify=1 00:09:47.399 verify=crc32c-intel 00:09:47.399 [job0] 00:09:47.399 filename=/dev/sda 00:09:47.399 [job1] 00:09:47.399 filename=/dev/sdb 00:09:47.399 queue_depth set to 113 (sda) 00:09:47.399 queue_depth set to 113 (sdb) 00:09:47.399 job0: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:09:47.399 job1: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:09:47.399 fio-3.35 00:09:47.399 Starting 2 threads 00:09:47.399 [2024-07-24 19:48:15.978163] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:47.399 [2024-07-24 19:48:15.981533] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:48.333 [2024-07-24 19:48:16.859850] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:48.591 [2024-07-24 19:48:17.117801] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:48.591 [2024-07-24 19:48:17.121627] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:48.591 00:09:48.591 
job0: (groupid=0, jobs=1): err= 0: pid=68974: Wed Jul 24 19:48:17 2024 00:09:48.591 read: IOPS=1743, BW=218MiB/s (228MB/s)(222MiB/1020msec) 00:09:48.591 slat (usec): min=8, max=6194, avg=22.99, stdev=149.50 00:09:48.591 clat (usec): min=485, max=25865, avg=5775.45, stdev=4844.22 00:09:48.591 lat (usec): min=1038, max=25901, avg=5798.44, stdev=4842.11 00:09:48.591 clat percentiles (usec): 00:09:48.591 | 1.00th=[ 1156], 5.00th=[ 1287], 10.00th=[ 1369], 20.00th=[ 1516], 00:09:48.591 | 30.00th=[ 1713], 40.00th=[ 2966], 50.00th=[ 5211], 60.00th=[ 6259], 00:09:48.591 | 70.00th=[ 7177], 80.00th=[ 8094], 90.00th=[13304], 95.00th=[16450], 00:09:48.591 | 99.00th=[21365], 99.50th=[21890], 99.90th=[23987], 99.95th=[25822], 00:09:48.591 | 99.99th=[25822] 00:09:48.591 bw ( KiB/s): min=121880, max=133120, per=34.91%, avg=127500.00, stdev=7947.88, samples=2 00:09:48.591 iops : min= 952, max= 1040, avg=996.00, stdev=62.23, samples=2 00:09:48.591 write: IOPS=1078, BW=135MiB/s (141MB/s)(131MiB/971msec); 0 zone resets 00:09:48.591 slat (usec): min=34, max=2953, avg=95.29, stdev=118.22 00:09:48.591 clat (usec): min=2779, max=38111, avg=21042.78, stdev=3725.52 00:09:48.591 lat (usec): min=2869, max=38169, avg=21138.07, stdev=3716.80 00:09:48.591 clat percentiles (usec): 00:09:48.591 | 1.00th=[ 9241], 5.00th=[16057], 10.00th=[17695], 20.00th=[18744], 00:09:48.591 | 30.00th=[19792], 40.00th=[20317], 50.00th=[20841], 60.00th=[21365], 00:09:48.591 | 70.00th=[21890], 80.00th=[22676], 90.00th=[25297], 95.00th=[27132], 00:09:48.591 | 99.00th=[33162], 99.50th=[34866], 99.90th=[37487], 99.95th=[38011], 00:09:48.591 | 99.99th=[38011] 00:09:48.591 bw ( KiB/s): min=129536, max=130549, per=45.68%, avg=130042.50, stdev=716.30, samples=2 00:09:48.591 iops : min= 1012, max= 1019, avg=1015.50, stdev= 4.95, samples=2 00:09:48.591 lat (usec) : 500=0.04%, 1000=0.11% 00:09:48.591 lat (msec) : 2=21.91%, 4=4.92%, 10=28.21%, 20=18.58%, 50=26.23% 00:09:48.591 cpu : usr=11.87%, sys=5.20%, ctx=2286, majf=0, 
minf=7 00:09:48.591 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=96.7%, >=64=0.0% 00:09:48.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.591 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:09:48.591 issued rwts: total=1778,1047,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.591 latency : target=0, window=0, percentile=100.00%, depth=32 00:09:48.591 job1: (groupid=0, jobs=1): err= 0: pid=68977: Wed Jul 24 19:48:17 2024 00:09:48.591 read: IOPS=1111, BW=139MiB/s (146MB/s)(142MiB/1021msec) 00:09:48.591 slat (usec): min=6, max=5767, avg=24.59, stdev=172.86 00:09:48.591 clat (usec): min=838, max=28804, avg=4908.43, stdev=5121.45 00:09:48.592 lat (usec): min=846, max=28830, avg=4933.01, stdev=5129.29 00:09:48.592 clat percentiles (usec): 00:09:48.592 | 1.00th=[ 1057], 5.00th=[ 1205], 10.00th=[ 1270], 20.00th=[ 1401], 00:09:48.592 | 30.00th=[ 1500], 40.00th=[ 1598], 50.00th=[ 1795], 60.00th=[ 3523], 00:09:48.592 | 70.00th=[ 5997], 80.00th=[ 8717], 90.00th=[13042], 95.00th=[16909], 00:09:48.592 | 99.00th=[21365], 99.50th=[21627], 99.90th=[23462], 99.95th=[28705], 00:09:48.592 | 99.99th=[28705] 00:09:48.592 bw ( KiB/s): min=143808, max=146176, per=39.70%, avg=144992.00, stdev=1674.43, samples=2 00:09:48.592 iops : min= 1123, max= 1142, avg=1132.50, stdev=13.44, samples=2 00:09:48.592 write: IOPS=1198, BW=150MiB/s (157MB/s)(153MiB/1021msec); 0 zone resets 00:09:48.592 slat (usec): min=29, max=2023, avg=79.06, stdev=88.52 00:09:48.592 clat (usec): min=2705, max=40834, avg=21974.37, stdev=4682.57 00:09:48.592 lat (usec): min=2798, max=40882, avg=22053.44, stdev=4675.59 00:09:48.592 clat percentiles (usec): 00:09:48.592 | 1.00th=[ 8979], 5.00th=[16319], 10.00th=[17957], 20.00th=[19268], 00:09:48.592 | 30.00th=[20055], 40.00th=[20579], 50.00th=[21103], 60.00th=[21627], 00:09:48.592 | 70.00th=[22414], 80.00th=[24511], 90.00th=[27395], 95.00th=[31589], 00:09:48.592 | 99.00th=[38011], 99.50th=[39584], 
99.90th=[40633], 99.95th=[40633], 00:09:48.592 | 99.99th=[40633] 00:09:48.592 bw ( KiB/s): min=151968, max=152832, per=53.53%, avg=152400.00, stdev=610.94, samples=2 00:09:48.592 iops : min= 1187, max= 1194, avg=1190.50, stdev= 4.95, samples=2 00:09:48.592 lat (usec) : 1000=0.34% 00:09:48.592 lat (msec) : 2=25.73%, 4=4.03%, 10=11.53%, 20=20.98%, 50=37.39% 00:09:48.592 cpu : usr=7.55%, sys=4.51%, ctx=2116, majf=0, minf=13 00:09:48.592 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=98.7%, >=64=0.0% 00:09:48.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.592 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:09:48.592 issued rwts: total=1135,1224,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.592 latency : target=0, window=0, percentile=100.00%, depth=32 00:09:48.592 00:09:48.592 Run status group 0 (all jobs): 00:09:48.592 READ: bw=357MiB/s (374MB/s), 139MiB/s-218MiB/s (146MB/s-228MB/s), io=364MiB (382MB), run=1020-1021msec 00:09:48.592 WRITE: bw=278MiB/s (292MB/s), 135MiB/s-150MiB/s (141MB/s-157MB/s), io=284MiB (298MB), run=971-1021msec 00:09:48.592 00:09:48.592 Disk stats (read/write): 00:09:48.592 sda: ios=1670/859, merge=0/0, ticks=9195/17837, in_queue=27032, util=89.57% 00:09:48.592 sdb: ios=1001/1037, merge=0/0, ticks=4603/22706, in_queue=27309, util=89.64% 00:09:48.592 19:48:17 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@101 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 524288 -d 128 -t randrw -r 1 -v 00:09:48.592 [global] 00:09:48.592 thread=1 00:09:48.592 invalidate=1 00:09:48.592 rw=randrw 00:09:48.592 time_based=1 00:09:48.592 runtime=1 00:09:48.592 ioengine=libaio 00:09:48.592 direct=1 00:09:48.592 bs=524288 00:09:48.592 iodepth=128 00:09:48.592 norandommap=0 00:09:48.592 numjobs=1 00:09:48.592 00:09:48.592 verify_dump=1 00:09:48.592 verify_backlog=512 00:09:48.592 verify_state_save=0 00:09:48.592 do_verify=1 00:09:48.592 verify=crc32c-intel 00:09:48.592 [job0] 00:09:48.592 
filename=/dev/sda 00:09:48.592 [job1] 00:09:48.592 filename=/dev/sdb 00:09:48.850 queue_depth set to 113 (sda) 00:09:48.850 queue_depth set to 113 (sdb) 00:09:48.850 job0: (g=0): rw=randrw, bs=(R) 512KiB-512KiB, (W) 512KiB-512KiB, (T) 512KiB-512KiB, ioengine=libaio, iodepth=128 00:09:48.850 job1: (g=0): rw=randrw, bs=(R) 512KiB-512KiB, (W) 512KiB-512KiB, (T) 512KiB-512KiB, ioengine=libaio, iodepth=128 00:09:48.850 fio-3.35 00:09:48.850 Starting 2 threads 00:09:48.850 [2024-07-24 19:48:17.380199] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:48.850 [2024-07-24 19:48:17.384700] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:49.784 [2024-07-24 19:48:18.304332] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:50.042 [2024-07-24 19:48:18.657766] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:50.301 00:09:50.301 job0: (groupid=0, jobs=1): err= 0: pid=69051: Wed Jul 24 19:48:18 2024 00:09:50.301 read: IOPS=437, BW=219MiB/s (229MB/s)(244MiB/1115msec) 00:09:50.301 slat (usec): min=19, max=47624, avg=1213.07, stdev=4152.93 00:09:50.301 clat (msec): min=61, max=314, avg=171.53, stdev=64.73 00:09:50.301 lat (msec): min=63, max=314, avg=172.74, stdev=65.11 00:09:50.301 clat percentiles (msec): 00:09:50.301 | 1.00th=[ 65], 5.00th=[ 82], 10.00th=[ 94], 20.00th=[ 112], 00:09:50.301 | 30.00th=[ 126], 40.00th=[ 138], 50.00th=[ 155], 60.00th=[ 184], 00:09:50.301 | 70.00th=[ 222], 80.00th=[ 241], 90.00th=[ 271], 95.00th=[ 279], 00:09:50.301 | 99.00th=[ 309], 99.50th=[ 313], 99.90th=[ 313], 99.95th=[ 313], 00:09:50.301 | 99.99th=[ 313] 00:09:50.301 bw ( KiB/s): min=60416, max=187392, per=33.73%, avg=123904.00, stdev=89785.59, samples=2 00:09:50.301 iops : min= 118, max= 366, avg=242.00, stdev=175.36, samples=2 00:09:50.301 write: IOPS=453, BW=227MiB/s (238MB/s)(135MiB/595msec); 0 zone resets 00:09:50.301 slat (usec): min=133, 
max=13586, avg=1204.06, stdev=2282.51 00:09:50.301 clat (msec): min=60, max=242, avg=152.32, stdev=34.78 00:09:50.301 lat (msec): min=61, max=243, avg=153.53, stdev=34.88 00:09:50.301 clat percentiles (msec): 00:09:50.301 | 1.00th=[ 65], 5.00th=[ 91], 10.00th=[ 97], 20.00th=[ 129], 00:09:50.301 | 30.00th=[ 140], 40.00th=[ 146], 50.00th=[ 155], 60.00th=[ 161], 00:09:50.301 | 70.00th=[ 165], 80.00th=[ 178], 90.00th=[ 194], 95.00th=[ 215], 00:09:50.301 | 99.00th=[ 241], 99.50th=[ 241], 99.90th=[ 243], 99.95th=[ 243], 00:09:50.301 | 99.99th=[ 243] 00:09:50.301 bw ( KiB/s): min=93184, max=183296, per=46.47%, avg=138240.00, stdev=63718.81, samples=2 00:09:50.301 iops : min= 182, max= 358, avg=270.00, stdev=124.45, samples=2 00:09:50.301 lat (msec) : 100=11.87%, 250=79.02%, 500=9.10% 00:09:50.301 cpu : usr=9.52%, sys=3.50%, ctx=389, majf=0, minf=5 00:09:50.301 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.1%, 16=4.2%, 32=8.4%, >=64=83.4% 00:09:50.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.301 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:09:50.301 issued rwts: total=488,270,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.301 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:50.301 job1: (groupid=0, jobs=1): err= 0: pid=69052: Wed Jul 24 19:48:18 2024 00:09:50.301 read: IOPS=290, BW=145MiB/s (152MB/s)(156MiB/1074msec) 00:09:50.301 slat (usec): min=27, max=24249, avg=1593.14, stdev=3367.36 00:09:50.301 clat (msec): min=72, max=367, avg=181.85, stdev=82.59 00:09:50.301 lat (msec): min=79, max=367, avg=183.45, stdev=83.06 00:09:50.301 clat percentiles (msec): 00:09:50.301 | 1.00th=[ 83], 5.00th=[ 102], 10.00th=[ 107], 20.00th=[ 114], 00:09:50.301 | 30.00th=[ 118], 40.00th=[ 127], 50.00th=[ 146], 60.00th=[ 169], 00:09:50.301 | 70.00th=[ 211], 80.00th=[ 279], 90.00th=[ 321], 95.00th=[ 334], 00:09:50.301 | 99.00th=[ 355], 99.50th=[ 359], 99.90th=[ 368], 99.95th=[ 368], 00:09:50.301 | 99.99th=[ 368] 
00:09:50.301 bw ( KiB/s): min=48128, max=216064, per=35.96%, avg=132096.00, stdev=118748.68, samples=2 00:09:50.301 iops : min= 94, max= 422, avg=258.00, stdev=231.93, samples=2 00:09:50.301 write: IOPS=329, BW=165MiB/s (173MB/s)(177MiB/1074msec); 0 zone resets 00:09:50.301 slat (usec): min=176, max=20622, avg=1412.62, stdev=2530.19 00:09:50.301 clat (msec): min=79, max=397, avg=196.90, stdev=83.35 00:09:50.301 lat (msec): min=80, max=398, avg=198.32, stdev=83.78 00:09:50.301 clat percentiles (msec): 00:09:50.301 | 1.00th=[ 82], 5.00th=[ 114], 10.00th=[ 122], 20.00th=[ 126], 00:09:50.301 | 30.00th=[ 134], 40.00th=[ 153], 50.00th=[ 167], 60.00th=[ 184], 00:09:50.301 | 70.00th=[ 247], 80.00th=[ 284], 90.00th=[ 338], 95.00th=[ 368], 00:09:50.301 | 99.00th=[ 380], 99.50th=[ 397], 99.90th=[ 397], 99.95th=[ 397], 00:09:50.301 | 99.99th=[ 397] 00:09:50.301 bw ( KiB/s): min=49152, max=237568, per=48.19%, avg=143360.00, stdev=133230.23, samples=2 00:09:50.301 iops : min= 96, max= 464, avg=280.00, stdev=260.22, samples=2 00:09:50.301 lat (msec) : 100=3.90%, 250=67.87%, 500=28.23% 00:09:50.301 cpu : usr=9.23%, sys=3.45%, ctx=311, majf=0, minf=5 00:09:50.301 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.5% 00:09:50.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.301 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:09:50.301 issued rwts: total=312,354,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.301 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:50.301 00:09:50.301 Run status group 0 (all jobs): 00:09:50.301 READ: bw=359MiB/s (376MB/s), 145MiB/s-219MiB/s (152MB/s-229MB/s), io=400MiB (419MB), run=1074-1115msec 00:09:50.301 WRITE: bw=291MiB/s (305MB/s), 165MiB/s-227MiB/s (173MB/s-238MB/s), io=312MiB (327MB), run=595-1074msec 00:09:50.301 00:09:50.301 Disk stats (read/write): 00:09:50.301 sda: ios=523/270, merge=0/0, ticks=34834/18687, in_queue=53521, util=82.09% 00:09:50.301 
sdb: ios=350/341, merge=0/0, ticks=23460/32113, in_queue=55574, util=78.05%
00:09:50.301 19:48:18 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1048576 -d 1024 -t read -r 1 -n 4
00:09:50.301 [global]
00:09:50.301 thread=1
00:09:50.301 invalidate=1
00:09:50.301 rw=read
00:09:50.301 time_based=1
00:09:50.301 runtime=1
00:09:50.301 ioengine=libaio
00:09:50.301 direct=1
00:09:50.301 bs=1048576
00:09:50.301 iodepth=1024
00:09:50.301 norandommap=1
00:09:50.301 numjobs=4
00:09:50.301
00:09:50.301 [job0]
00:09:50.301 filename=/dev/sda
00:09:50.301 [job1]
00:09:50.301 filename=/dev/sdb
00:09:50.301 queue_depth set to 113 (sda)
00:09:50.301 queue_depth set to 113 (sdb)
00:09:50.301 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=1024
00:09:50.301 ...
00:09:50.301 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=1024
00:09:50.301 ...
00:09:50.301 fio-3.35
00:09:50.301 Starting 8 threads
00:09:53.582
00:09:53.582 job0: (groupid=0, jobs=1): err= 0: pid=69119: Wed Jul 24 19:48:21 2024
00:09:53.582 read: IOPS=14, BW=14.6MiB/s (15.3MB/s)(37.0MiB/2535msec)
00:09:53.582 slat (usec): min=851, max=1267.3k, avg=49817.25, stdev=208960.64
00:09:53.582 clat (msec): min=690, max=2527, avg=2275.23, stdev=293.55
00:09:53.582 lat (msec): min=1957, max=2534, avg=2325.05, stdev=125.42
00:09:53.582 clat percentiles (msec):
00:09:53.582 | 1.00th=[ 693], 5.00th=[ 1955], 10.00th=[ 2198], 20.00th=[ 2265],
00:09:53.582 | 30.00th=[ 2265], 40.00th=[ 2299], 50.00th=[ 2333], 60.00th=[ 2366],
00:09:53.582 | 70.00th=[ 2366], 80.00th=[ 2400], 90.00th=[ 2500], 95.00th=[ 2534],
00:09:53.582 | 99.00th=[ 2534], 99.50th=[ 2534], 99.90th=[ 2534], 99.95th=[ 2534],
00:09:53.582 | 99.99th=[ 2534]
00:09:53.582 lat (msec) : 750=2.70%, 2000=5.41%, >=2000=91.89%
00:09:53.582 cpu : usr=0.04%, sys=1.54%, ctx=70, majf=0, minf=9473
00:09:53.582 IO depths : 1=2.7%, 2=5.4%, 4=10.8%, 8=21.6%, 16=43.2%, 32=16.2%, >=64=0.0%
00:09:53.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:53.582 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:09:53.582 issued rwts: total=37,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:53.582 latency : target=0, window=0, percentile=100.00%, depth=1024
00:09:53.582 job0: (groupid=0, jobs=1): err= 0: pid=69120: Wed Jul 24 19:48:21 2024
00:09:53.582 read: IOPS=30, BW=30.0MiB/s (31.5MB/s)(77.0MiB/2565msec)
00:09:53.582 slat (usec): min=742, max=1247.5k, avg=24104.38, stdev=143522.28
00:09:53.582 clat (msec): min=708, max=2562, avg=2382.21, stdev=253.89
00:09:53.582 lat (msec): min=1955, max=2564, avg=2406.32, stdev=165.67
00:09:53.582 clat percentiles (msec):
00:09:53.582 | 1.00th=[ 709], 5.00th=[ 1972], 10.00th=[ 2198], 20.00th=[ 2299],
00:09:53.582 | 30.00th=[ 2366], 40.00th=[ 2433], 50.00th=[ 2433], 60.00th=[ 2500],
00:09:53.582 | 70.00th=[ 2534], 80.00th=[ 2534], 90.00th=[ 2567], 95.00th=[ 2567],
00:09:53.582 | 99.00th=[ 2567], 99.50th=[ 2567], 99.90th=[ 2567], 99.95th=[ 2567],
00:09:53.582 | 99.99th=[ 2567]
00:09:53.582 lat (msec) : 750=1.30%, 2000=7.79%, >=2000=90.91%
00:09:53.582 cpu : usr=0.04%, sys=3.20%, ctx=74, majf=0, minf=19713
00:09:53.582 IO depths : 1=1.3%, 2=2.6%, 4=5.2%, 8=10.4%, 16=20.8%, 32=41.6%, >=64=18.2%
00:09:53.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:53.582 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:09:53.582 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:53.582 latency : target=0, window=0, percentile=100.00%, depth=1024
00:09:53.582 job0: (groupid=0, jobs=1): err= 0: pid=69121: Wed Jul 24 19:48:21 2024
00:09:53.582 read: IOPS=30, BW=30.3MiB/s (31.8MB/s)(78.0MiB/2574msec)
00:09:53.582 slat (usec): min=750, max=1240.4k, avg=23848.99, stdev=141695.58
00:09:53.582 clat (msec): min=713, max=2571, avg=2387.41, stdev=253.38
00:09:53.582 lat (msec): min=1953, max=2573, avg=2411.26, stdev=166.36
00:09:53.582 clat percentiles (msec):
00:09:53.582 | 1.00th=[ 718], 5.00th=[ 1972], 10.00th=[ 2198], 20.00th=[ 2265],
00:09:53.582 | 30.00th=[ 2333], 40.00th=[ 2400], 50.00th=[ 2467], 60.00th=[ 2500],
00:09:53.582 | 70.00th=[ 2534], 80.00th=[ 2567], 90.00th=[ 2567], 95.00th=[ 2567],
00:09:53.582 | 99.00th=[ 2567], 99.50th=[ 2567], 99.90th=[ 2567], 99.95th=[ 2567],
00:09:53.582 | 99.99th=[ 2567]
00:09:53.582 lat (msec) : 750=1.28%, 2000=6.41%, >=2000=92.31%
00:09:53.582 cpu : usr=0.00%, sys=3.34%, ctx=101, majf=0, minf=19969
00:09:53.582 IO depths : 1=1.3%, 2=2.6%, 4=5.1%, 8=10.3%, 16=20.5%, 32=41.0%, >=64=19.2%
00:09:53.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:53.582 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:09:53.582 issued rwts: total=78,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:53.582 latency : target=0, window=0, percentile=100.00%, depth=1024
00:09:53.582 job0: (groupid=0, jobs=1): err= 0: pid=69122: Wed Jul 24 19:48:21 2024
00:09:53.582 read: IOPS=26, BW=26.2MiB/s (27.5MB/s)(67.0MiB/2557msec)
00:09:53.582 slat (usec): min=671, max=1243.7k, avg=27597.34, stdev=153285.06
00:09:53.582 clat (msec): min=707, max=2554, avg=2342.53, stdev=267.32
00:09:53.582 lat (msec): min=1951, max=2556, avg=2370.12, stdev=175.71
00:09:53.582 clat percentiles (msec):
00:09:53.582 | 1.00th=[ 709], 5.00th=[ 1955], 10.00th=[ 1972], 20.00th=[ 2232],
00:09:53.582 | 30.00th=[ 2299], 40.00th=[ 2333], 50.00th=[ 2400], 60.00th=[ 2467],
00:09:53.582 | 70.00th=[ 2500], 80.00th=[ 2534], 90.00th=[ 2534], 95.00th=[ 2534],
00:09:53.582 | 99.00th=[ 2567], 99.50th=[ 2567], 99.90th=[ 2567], 99.95th=[ 2567],
00:09:53.582 | 99.99th=[ 2567]
00:09:53.582 lat (msec) : 750=1.49%, 2000=8.96%, >=2000=89.55%
00:09:53.582 cpu : usr=0.00%, sys=2.70%, ctx=92, majf=0, minf=17153
00:09:53.583 IO depths : 1=1.5%, 2=3.0%, 4=6.0%, 8=11.9%, 16=23.9%, 32=47.8%, >=64=6.0%
00:09:53.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:53.583 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:09:53.583 issued rwts: total=67,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:53.583 latency : target=0, window=0, percentile=100.00%, depth=1024
00:09:53.583 job1: (groupid=0, jobs=1): err= 0: pid=69123: Wed Jul 24 19:48:21 2024
00:09:53.583 read: IOPS=18, BW=18.7MiB/s (19.6MB/s)(48.0MiB/2572msec)
00:09:53.583 slat (usec): min=623, max=1248.4k, avg=38470.42, stdev=181744.22
00:09:53.583 clat (msec): min=724, max=2569, avg=2331.09, stdev=310.80
00:09:53.583 lat (msec): min=1972, max=2571, avg=2369.56, stdev=203.43
00:09:53.583 clat percentiles (msec):
00:09:53.583 | 1.00th=[ 726], 5.00th=[ 1972], 10.00th=[ 1989], 20.00th=[ 2265],
00:09:53.583 | 30.00th=[ 2265], 40.00th=[ 2333], 50.00th=[ 2366], 60.00th=[ 2467],
00:09:53.583 | 70.00th=[ 2567], 80.00th=[ 2567], 90.00th=[ 2567], 95.00th=[ 2567],
00:09:53.583 | 99.00th=[ 2567], 99.50th=[ 2567], 99.90th=[ 2567], 99.95th=[ 2567],
00:09:53.583 | 99.99th=[ 2567]
00:09:53.583 lat (msec) : 750=2.08%, 2000=10.42%, >=2000=87.50%
00:09:53.583 cpu : usr=0.00%, sys=1.94%, ctx=76, majf=0, minf=12289
00:09:53.583 IO depths : 1=2.1%, 2=4.2%, 4=8.3%, 8=16.7%, 16=33.3%, 32=35.4%, >=64=0.0%
00:09:53.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:53.583 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:09:53.583 issued rwts: total=48,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:53.583 latency : target=0, window=0, percentile=100.00%, depth=1024
00:09:53.583 job1: (groupid=0, jobs=1): err= 0: pid=69124: Wed Jul 24 19:48:21 2024
00:09:53.583 read: IOPS=18, BW=18.2MiB/s (19.1MB/s)(47.0MiB/2576msec)
00:09:53.583 slat (usec): min=667, max=1240.3k, avg=39271.27, stdev=181933.54
00:09:53.583 clat (msec): min=729, max=2570, avg=2388.07, stdev=300.09
00:09:53.583 lat (msec): min=1969, max=2575, avg=2427.34, stdev=171.50
00:09:53.583 clat percentiles (msec):
00:09:53.583 | 1.00th=[ 726], 5.00th=[ 1972], 10.00th=[ 2005], 20.00th=[ 2299],
00:09:53.583 | 30.00th=[ 2366], 40.00th=[ 2400], 50.00th=[ 2500], 60.00th=[ 2534],
00:09:53.583 | 70.00th=[ 2567], 80.00th=[ 2567], 90.00th=[ 2567], 95.00th=[ 2567],
00:09:53.583 | 99.00th=[ 2567], 99.50th=[ 2567], 99.90th=[ 2567], 99.95th=[ 2567],
00:09:53.583 | 99.99th=[ 2567]
00:09:53.583 lat (msec) : 750=2.13%, 2000=6.38%, >=2000=91.49%
00:09:53.583 cpu : usr=0.00%, sys=2.17%, ctx=75, majf=0, minf=12033
00:09:53.583 IO depths : 1=2.1%, 2=4.3%, 4=8.5%, 8=17.0%, 16=34.0%, 32=34.0%, >=64=0.0%
00:09:53.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:53.583 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:09:53.583 issued rwts: total=47,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:53.583 latency : target=0, window=0, percentile=100.00%, depth=1024
00:09:53.583 job1: (groupid=0, jobs=1): err= 0: pid=69125: Wed Jul 24 19:48:21 2024
00:09:53.583 read: IOPS=28, BW=28.4MiB/s (29.8MB/s)(74.0MiB/2603msec)
00:09:53.583 slat (usec): min=697, max=1248.3k, avg=25213.67, stdev=146409.84
00:09:53.583 clat (msec): min=736, max=2601, avg=2452.98, stdev=251.49
00:09:53.583 lat (msec): min=1984, max=2602, avg=2478.19, stdev=150.18
00:09:53.583 clat percentiles (msec):
00:09:53.583 | 1.00th=[ 735], 5.00th=[ 2005], 10.00th=[ 2265], 20.00th=[ 2333],
00:09:53.583 | 30.00th=[ 2433], 40.00th=[ 2500], 50.00th=[ 2567], 60.00th=[ 2567],
00:09:53.583 | 70.00th=[ 2567], 80.00th=[ 2601], 90.00th=[ 2601], 95.00th=[ 2601],
00:09:53.583 | 99.00th=[ 2601], 99.50th=[ 2601], 99.90th=[ 2601], 99.95th=[ 2601],
00:09:53.583 | 99.99th=[ 2601]
00:09:53.583 lat (msec) : 750=1.35%, 2000=2.70%, >=2000=95.95%
00:09:53.583 cpu : usr=0.08%, sys=2.88%, ctx=79, majf=0, minf=18945
00:09:53.583 IO depths : 1=1.4%, 2=2.7%, 4=5.4%, 8=10.8%, 16=21.6%, 32=43.2%, >=64=14.9%
00:09:53.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:53.583 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:09:53.583 issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:53.583 latency : target=0, window=0, percentile=100.00%, depth=1024
00:09:53.583 job1: (groupid=0, jobs=1): err= 0: pid=69126: Wed Jul 24 19:48:21 2024
00:09:53.583 read: IOPS=14, BW=14.1MiB/s (14.8MB/s)(36.0MiB/2557msec)
00:09:53.583 slat (usec): min=934, max=1248.2k, avg=50685.85, stdev=209430.24
00:09:53.583 clat (msec): min=731, max=2555, avg=2329.14, stdev=296.58
00:09:53.583 lat (msec): min=1979, max=2556, avg=2379.83, stdev=117.78
00:09:53.583 clat percentiles (msec):
00:09:53.583 | 1.00th=[ 735], 5.00th=[ 1989], 10.00th=[ 2265], 20.00th=[ 2265],
00:09:53.583 | 30.00th=[ 2299], 40.00th=[ 2366], 50.00th=[ 2400], 60.00th=[ 2433],
00:09:53.583 | 70.00th=[ 2433], 80.00th=[ 2467], 90.00th=[ 2500], 95.00th=[ 2534],
00:09:53.583 | 99.00th=[ 2567], 99.50th=[ 2567], 99.90th=[ 2567], 99.95th=[ 2567],
00:09:53.583 |
99.99th=[ 2567]
00:09:53.583 lat (msec) : 750=2.78%, 2000=2.78%, >=2000=94.44%
00:09:53.583 cpu : usr=0.04%, sys=1.49%, ctx=86, majf=0, minf=9217
00:09:53.583 IO depths : 1=2.8%, 2=5.6%, 4=11.1%, 8=22.2%, 16=44.4%, 32=13.9%, >=64=0.0%
00:09:53.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:53.583 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:09:53.583 issued rwts: total=36,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:53.583 latency : target=0, window=0, percentile=100.00%, depth=1024
00:09:53.583
00:09:53.583 Run status group 0 (all jobs):
00:09:53.583 READ: bw=178MiB/s (187MB/s), 14.1MiB/s-30.3MiB/s (14.8MB/s-31.8MB/s), io=464MiB (487MB), run=2535-2603msec
00:09:53.583
00:09:53.583 Disk stats (read/write):
00:09:53.583 sda: ios=226/0, merge=0/0, ticks=63613/0, in_queue=63613, util=94.76%
00:09:53.583 sdb: ios=158/0, merge=0/0, ticks=39331/0, in_queue=39331, util=86.19%
00:09:53.583 19:48:21 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@104 -- # '[' 0 -eq 1 ']'
00:09:53.583 19:48:21 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@116 -- # fio_pid=69155
00:09:53.583 19:48:21 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1048576 -d 128 -t rw -r 10
00:09:53.583 19:48:21 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@118 -- # sleep 3
00:09:53.583 [global]
00:09:53.583 thread=1
00:09:53.583 invalidate=1
00:09:53.583 rw=rw
00:09:53.583 time_based=1
00:09:53.583 runtime=10
00:09:53.583 ioengine=libaio
00:09:53.583 direct=1
00:09:53.583 bs=1048576
00:09:53.583 iodepth=128
00:09:53.583 norandommap=1
00:09:53.583 numjobs=1
00:09:53.583
00:09:53.583 [job0]
00:09:53.583 filename=/dev/sda
00:09:53.583 [job1]
00:09:53.583 filename=/dev/sdb
00:09:53.583 queue_depth set to 113 (sda)
00:09:53.583 queue_depth set to 113 (sdb)
00:09:53.583 job0: (g=0): rw=rw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:09:53.583 job1: (g=0): rw=rw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:09:53.583 fio-3.35
00:09:53.583 Starting 2 threads
00:09:53.583 [2024-07-24 19:48:21.972747] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:09:53.583 [2024-07-24 19:48:21.976732] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:09:56.112 19:48:24 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0
00:09:56.447 [2024-07-24 19:48:25.032439] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (raid0) received event(SPDK_BDEV_EVENT_REMOVE)
00:09:56.447 [2024-07-24 19:48:25.035382] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f1d
00:09:56.448 [2024-07-24 19:48:25.039004] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f1d
00:09:56.448 [2024-07-24 19:48:25.041293] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f1d
00:09:56.448 [2024-07-24 19:48:25.041444] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f1d
00:09:56.448 [2024-07-24 19:48:25.041558] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f1d
00:09:56.448 [2024-07-24 19:48:25.041676] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f1d
00:09:56.448 [2024-07-24 19:48:25.041943] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f1d
00:09:56.448 [2024-07-24 19:48:25.042156] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f1d
00:09:56.448 [2024-07-24 19:48:25.042234] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f1d
00:09:56.448 [2024-07-24 19:48:25.052625] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f1e
00:09:56.448 [2024-07-24 19:48:25.052700] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for
transfer_tag=f1e
00:09:56.448 [2024-07-24 19:48:25.059286] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f1e
00:09:56.448 [2024-07-24 19:48:25.064761] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f1e
00:09:56.448 [2024-07-24 19:48:25.064831] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f1e
00:09:56.448 [2024-07-24 19:48:25.071791] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f1e
00:09:56.448 [2024-07-24 19:48:25.074397] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f1e
00:09:56.448 [2024-07-24 19:48:25.076388] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f1e
00:09:56.448 [2024-07-24 19:48:25.077619] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f1e
00:09:56.448 [2024-07-24 19:48:25.079544] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f1e
00:09:56.448 [2024-07-24 19:48:25.081758] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f1e
00:09:56.448 19:48:25 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@124 -- # for malloc_bdev in $malloc_bdevs
00:09:56.448 19:48:25 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:09:56.448 [2024-07-24 19:48:25.097678] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f1e
00:09:56.448 [2024-07-24 19:48:25.099175] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f1e
00:09:56.448 [2024-07-24 19:48:25.100544] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f1e
00:09:56.709 [2024-07-24 19:48:25.101684] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f1e
00:09:56.709 [2024-07-24 19:48:25.103062] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f1e
00:09:56.709 [2024-07-24 19:48:25.104185] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f1f
00:09:56.709 [2024-07-24 19:48:25.106870] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f1f
00:09:56.709 [2024-07-24 19:48:25.109782] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f1f
00:09:56.709 [2024-07-24 19:48:25.114230] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f1f
00:09:56.709 [2024-07-24 19:48:25.115417] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f1f
00:09:56.709 [2024-07-24 19:48:25.116895] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f1f
00:09:56.709 [2024-07-24 19:48:25.118286] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f1f
00:09:56.709 [2024-07-24 19:48:25.119470] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f1f
00:09:56.709 [2024-07-24 19:48:25.120846] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f1f
00:09:56.709 [2024-07-24 19:48:25.122055] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f1f
00:09:56.709 [2024-07-24 19:48:25.123767] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f1f
00:09:56.709 [2024-07-24 19:48:25.125010] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f1f
00:09:56.709 [2024-07-24 19:48:25.126575] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f1f
00:09:56.709 [2024-07-24 19:48:25.127640] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f1f
00:09:56.709 [2024-07-24 19:48:25.129057] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f1f
00:09:56.709 [2024-07-24 19:48:25.130252] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f1f
00:09:56.709 [2024-07-24 19:48:25.131339] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f20
00:09:56.709 [2024-07-24 19:48:25.133859] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f20
00:09:56.709 [2024-07-24 19:48:25.140152] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f20
00:09:56.709 [2024-07-24 19:48:25.141605] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f20
00:09:56.709 [2024-07-24 19:48:25.143104] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f20
00:09:56.709 [2024-07-24 19:48:25.144865] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f20
00:09:56.709 [2024-07-24 19:48:25.146398] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f20
00:09:56.709 [2024-07-24 19:48:25.148605] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f20
00:09:56.709 [2024-07-24 19:48:25.150802] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f20
00:09:56.709 [2024-07-24 19:48:25.153068] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f20
00:09:56.709 [2024-07-24 19:48:25.155048] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f20
00:09:56.709 [2024-07-24 19:48:25.157275] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f20
00:09:56.709 [2024-07-24 19:48:25.162808] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f20
00:09:56.709 [2024-07-24 19:48:25.165066] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f20
00:09:56.709 [2024-07-24 19:48:25.166355] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f20
00:09:56.709 [2024-07-24 19:48:25.168565] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=f20
00:09:56.967 19:48:25 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@124 -- # for
malloc_bdev in $malloc_bdevs
00:09:56.967 19:48:25 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:09:56.967 fio: io_u error on file /dev/sda: Input/output error: write offset=122683392, buflen=1048576
00:09:56.967 fio: io_u error on file /dev/sda: Input/output error: write offset=126877696, buflen=1048576
00:09:57.231 fio: pid=69182, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=127926272, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=128974848, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=130023424, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=131072000, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=132120576, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=123731968, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=124780544, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=125829120, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=133169152, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=99614720, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=0, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=1048576, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=100663296, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=101711872, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=102760448, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=2097152, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=103809024, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=104857600, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=3145728, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=105906176, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=4194304, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=5242880, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=106954752, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=6291456, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=108003328, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=109051904, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=7340032, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=110100480, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=111149056, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=8388608, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=112197632, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=9437184, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=113246208, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=10485760, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=114294784, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=115343360, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=116391936, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=11534336, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=12582912, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=117440512, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=118489088, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=13631488, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=14680064, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=15728640, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=16777216, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=17825792, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=119537664, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=120586240, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=18874368, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=19922944, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=20971520, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=22020096, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=23068672, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=121634816, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=24117248, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=122683392, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=25165824, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=123731968, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=26214400, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=124780544, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=125829120, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=126877696, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=27262976, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=28311552, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=29360128, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=127926272, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=128974848, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=130023424, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=131072000, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=132120576, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=30408704, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=133169152, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=0, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=1048576, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=2097152, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=3145728, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=4194304, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=5242880, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=6291456, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=31457280, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=32505856, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=7340032, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=8388608, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=9437184, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=33554432, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=10485760, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=11534336, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=12582912, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=34603008, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=13631488, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=14680064, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=15728640, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=16777216, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=35651584, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=36700160, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=17825792, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=18874368, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=37748736, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=38797312, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=39845888, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=40894464, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=19922944, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=20971520, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=41943040, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=42991616, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=44040192, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=45088768, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: write offset=46137344, buflen=1048576
00:09:57.231 fio: io_u error on file /dev/sda: Input/output error: read offset=22020096, buflen=1048576
00:09:57.232 fio: io_u error on file /dev/sda: Input/output error: write offset=47185920, buflen=1048576
00:09:57.232 fio: io_u error on file /dev/sda: Input/output error: write offset=48234496, buflen=1048576
00:09:57.232 fio: io_u error on file /dev/sda: Input/output error: write offset=49283072, buflen=1048576
00:09:57.232 fio: io_u error on file /dev/sda: Input/output error: read offset=23068672, buflen=1048576
00:09:57.232 fio: io_u error on file /dev/sda: Input/output error: read offset=24117248, buflen=1048576
00:09:57.232 fio: io_u error on file /dev/sda: Input/output error: read offset=25165824, buflen=1048576
00:09:57.232 fio: io_u error on file /dev/sda: Input/output error: read offset=26214400, buflen=1048576
00:09:57.232 fio: io_u error on file /dev/sda: Input/output error: write offset=50331648, buflen=1048576
00:09:57.232 fio: io_u error on file /dev/sda: Input/output error: write offset=51380224, buflen=1048576
00:09:57.232 fio: io_u error on file /dev/sda: Input/output error: write offset=52428800, buflen=1048576
00:09:57.232 fio: io_u error on file /dev/sda: Input/output error: write offset=53477376, buflen=1048576
00:09:57.232 fio: io_u error on file /dev/sda: Input/output error: write offset=54525952, buflen=1048576
00:09:57.232 fio: io_u error on file /dev/sda: Input/output error: read offset=27262976, buflen=1048576
00:09:57.232 fio: io_u error on file /dev/sda: Input/output error: read offset=28311552, buflen=1048576
00:09:57.232 fio: io_u error on file /dev/sda: Input/output error: read offset=29360128, buflen=1048576
00:09:57.232 fio: io_u error on file /dev/sda: Input/output error: write offset=55574528, buflen=1048576
00:09:57.232 fio: io_u error on file /dev/sda: Input/output error: read offset=30408704, buflen=1048576
00:09:57.232 19:48:25 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2
00:09:57.491 [2024-07-24 19:48:26.024304] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (Malloc2) received event(SPDK_BDEV_EVENT_REMOVE)
00:09:58.058 [2024-07-24
19:48:26.617419] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100b 00:09:58.058 [2024-07-24 19:48:26.619037] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100b 00:09:58.058 [2024-07-24 19:48:26.620342] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100b 00:09:58.058 [2024-07-24 19:48:26.621297] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100b 00:09:58.058 [2024-07-24 19:48:26.622263] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100b 00:09:58.058 [2024-07-24 19:48:26.623478] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100b 00:09:58.058 [2024-07-24 19:48:26.625053] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100b 00:09:58.058 [2024-07-24 19:48:26.626676] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100b 00:09:58.058 [2024-07-24 19:48:26.627970] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100b 00:09:58.058 [2024-07-24 19:48:26.629260] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100b 00:09:58.058 [2024-07-24 19:48:26.629357] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100b 00:09:58.058 [2024-07-24 19:48:26.629431] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100b 00:09:58.058 [2024-07-24 19:48:26.629493] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100b 00:09:58.058 [2024-07-24 19:48:26.629555] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100b 00:09:58.058 [2024-07-24 19:48:26.629611] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100c 00:09:58.058 [2024-07-24 19:48:26.629667] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100c 00:09:58.058 [2024-07-24 
19:48:26.631304] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100c 00:09:58.058 19:48:26 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@131 -- # fio_status=0 00:09:58.058 19:48:26 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@132 -- # wait 69155 00:09:58.058 [2024-07-24 19:48:26.639485] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100c 00:09:58.058 [2024-07-24 19:48:26.642919] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100c 00:09:58.058 [2024-07-24 19:48:26.644737] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100c 00:09:58.058 [2024-07-24 19:48:26.646359] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100c 00:09:58.058 [2024-07-24 19:48:26.648279] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100c 00:09:58.058 [2024-07-24 19:48:26.650097] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100c 00:09:58.058 [2024-07-24 19:48:26.651649] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100c 00:09:58.058 [2024-07-24 19:48:26.653331] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100c 00:09:58.058 [2024-07-24 19:48:26.655146] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100c 00:09:58.058 [2024-07-24 19:48:26.656765] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100c 00:09:58.058 [2024-07-24 19:48:26.658470] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100c 00:09:58.058 [2024-07-24 19:48:26.660390] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100c 00:09:58.058 [2024-07-24 19:48:26.662156] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100c 00:09:58.058 [2024-07-24 19:48:26.663906] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100d 
00:09:58.058 [2024-07-24 19:48:26.665937] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100d 00:09:58.058 [2024-07-24 19:48:26.667676] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100d 00:09:58.058 [2024-07-24 19:48:26.669528] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100d 00:09:58.058 [2024-07-24 19:48:26.671720] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100d 00:09:58.058 [2024-07-24 19:48:26.673699] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100d 00:09:58.058 [2024-07-24 19:48:26.675537] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100d 00:09:58.058 [2024-07-24 19:48:26.677365] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100d 00:09:58.058 [2024-07-24 19:48:26.679345] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100d 00:09:58.058 [2024-07-24 19:48:26.681119] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100d 00:09:58.058 [2024-07-24 19:48:26.682887] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100d 00:09:58.058 [2024-07-24 19:48:26.684692] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100d 00:09:58.058 [2024-07-24 19:48:26.686554] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100d 00:09:58.058 [2024-07-24 19:48:26.688393] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100d 00:09:58.058 [2024-07-24 19:48:26.690161] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100d 00:09:58.058 [2024-07-24 19:48:26.691803] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100d 00:09:58.058 [2024-07-24 19:48:26.693639] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100e 00:09:58.058 
[2024-07-24 19:48:26.695622] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100e 00:09:58.058 [2024-07-24 19:48:26.697555] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100e 00:09:58.058 [2024-07-24 19:48:26.699240] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100e 00:09:58.058 [2024-07-24 19:48:26.700769] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100e 00:09:58.058 [2024-07-24 19:48:26.702437] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100e 00:09:58.058 [2024-07-24 19:48:26.704046] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100e 00:09:58.058 [2024-07-24 19:48:26.705739] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100e 00:09:58.058 [2024-07-24 19:48:26.707461] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100e 00:09:58.058 [2024-07-24 19:48:26.709388] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100e 00:09:58.058 [2024-07-24 19:48:26.710983] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100e 00:09:58.058 [2024-07-24 19:48:26.712524] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100e 00:09:58.058 [2024-07-24 19:48:26.714105] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100e 00:09:58.058 [2024-07-24 19:48:26.715932] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100e 00:09:58.058 [2024-07-24 19:48:26.717478] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100e 00:09:58.059 [2024-07-24 19:48:26.719114] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=100e 00:09:58.317 fio: io_u error on file /dev/sdb: Input/output error: write offset=878706688, buflen=1048576 00:09:58.317 fio: io_u error on file /dev/sdb: 
Input/output error: write offset=879755264, buflen=1048576 00:09:58.317 fio: io_u error on file /dev/sdb: Input/output error: write offset=880803840, buflen=1048576 00:09:58.317 fio: io_u error on file /dev/sdb: Input/output error: write offset=881852416, buflen=1048576 00:09:58.317 fio: io_u error on file /dev/sdb: Input/output error: write offset=882900992, buflen=1048576 00:09:58.317 fio: io_u error on file /dev/sdb: Input/output error: write offset=883949568, buflen=1048576 00:09:58.317 fio: io_u error on file /dev/sdb: Input/output error: write offset=884998144, buflen=1048576 00:09:58.317 fio: io_u error on file /dev/sdb: Input/output error: write offset=886046720, buflen=1048576 00:09:58.317 fio: io_u error on file /dev/sdb: Input/output error: write offset=887095296, buflen=1048576 00:09:58.317 fio: io_u error on file /dev/sdb: Input/output error: write offset=888143872, buflen=1048576 00:09:58.317 fio: io_u error on file /dev/sdb: Input/output error: write offset=889192448, buflen=1048576 00:09:58.317 fio: io_u error on file /dev/sdb: Input/output error: write offset=890241024, buflen=1048576 00:09:58.317 fio: io_u error on file /dev/sdb: Input/output error: write offset=891289600, buflen=1048576 00:09:58.317 fio: io_u error on file /dev/sdb: Input/output error: write offset=892338176, buflen=1048576 00:09:58.317 fio: io_u error on file /dev/sdb: Input/output error: write offset=893386752, buflen=1048576 00:09:58.317 fio: io_u error on file /dev/sdb: Input/output error: write offset=894435328, buflen=1048576 00:09:58.317 fio: io_u error on file /dev/sdb: Input/output error: write offset=874512384, buflen=1048576 00:09:58.317 fio: io_u error on file /dev/sdb: Input/output error: write offset=875560960, buflen=1048576 00:09:58.317 fio: io_u error on file /dev/sdb: Input/output error: write offset=876609536, buflen=1048576 00:09:58.317 fio: io_u error on file /dev/sdb: Input/output error: write offset=877658112, buflen=1048576 00:09:58.317 fio: io_u error on 
file /dev/sdb: Input/output error: read offset=832569344, buflen=1048576 00:09:58.317 fio: io_u error on file /dev/sdb: Input/output error: write offset=895483904, buflen=1048576 00:09:58.317 fio: io_u error on file /dev/sdb: Input/output error: write offset=896532480, buflen=1048576 00:09:58.317 fio: io_u error on file /dev/sdb: Input/output error: write offset=897581056, buflen=1048576 00:09:58.317 fio: io_u error on file /dev/sdb: Input/output error: write offset=898629632, buflen=1048576 00:09:58.317 fio: io_u error on file /dev/sdb: Input/output error: write offset=899678208, buflen=1048576 00:09:58.317 fio: io_u error on file /dev/sdb: Input/output error: write offset=900726784, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=901775360, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: read offset=833617920, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=902823936, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: read offset=834666496, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=903872512, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=904921088, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: read offset=835715072, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: read offset=836763648, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=905969664, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=907018240, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=908066816, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: read offset=837812224, buflen=1048576 00:09:58.318 fio: io_u 
error on file /dev/sdb: Input/output error: write offset=909115392, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=910163968, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: read offset=838860800, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=911212544, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=912261120, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=913309696, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=914358272, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: read offset=839909376, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: read offset=840957952, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=915406848, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=916455424, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=917504000, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=918552576, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: read offset=842006528, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=919601152, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: read offset=843055104, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: read offset=844103680, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=920649728, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: read offset=845152256, buflen=1048576 00:09:58.318 
fio: io_u error on file /dev/sdb: Input/output error: write offset=921698304, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=922746880, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=923795456, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=924844032, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: read offset=846200832, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=925892608, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=926941184, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=927989760, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: read offset=847249408, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=929038336, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: read offset=848297984, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: read offset=849346560, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=930086912, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=931135488, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=932184064, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=933232640, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: read offset=850395136, buflen=1048576 00:09:58.318 fio: pid=69183, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=934281216, buflen=1048576 00:09:58.318 
fio: io_u error on file /dev/sdb: Input/output error: read offset=851443712, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: read offset=852492288, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=935329792, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: read offset=853540864, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: read offset=854589440, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=936378368, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: read offset=855638016, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: read offset=856686592, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: read offset=857735168, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: read offset=858783744, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=937426944, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: read offset=859832320, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=938475520, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: read offset=860880896, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=939524096, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: read offset=861929472, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=940572672, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: read offset=862978048, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=941621248, buflen=1048576 
00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=942669824, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: read offset=864026624, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: read offset=865075200, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=943718400, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=944766976, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=945815552, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=946864128, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=947912704, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: read offset=866123776, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: read offset=867172352, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: read offset=868220928, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=948961280, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: read offset=869269504, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: read offset=870318080, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: read offset=871366656, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=950009856, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=951058432, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=952107008, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: read offset=872415232, 
buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=953155584, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: read offset=873463808, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=954204160, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=955252736, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=956301312, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=957349888, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: read offset=874512384, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: read offset=875560960, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=958398464, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=959447040, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: read offset=876609536, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=960495616, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: write offset=961544192, buflen=1048576 00:09:58.318 fio: io_u error on file /dev/sdb: Input/output error: read offset=877658112, buflen=1048576 00:09:58.318 00:09:58.318 job0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=69182: Wed Jul 24 19:48:26 2024 00:09:58.318 read: IOPS=115, BW=98.0MiB/s (103MB/s)(351MiB/3581msec) 00:09:58.318 slat (usec): min=28, max=103305, avg=3718.58, stdev=8443.27 00:09:58.318 clat (msec): min=289, max=722, avg=456.80, stdev=94.84 00:09:58.318 lat (msec): min=289, max=722, avg=460.29, stdev=95.17 00:09:58.318 clat percentiles (msec): 00:09:58.318 | 1.00th=[ 292], 
5.00th=[ 326], 10.00th=[ 334], 20.00th=[ 363], 00:09:58.318 | 30.00th=[ 393], 40.00th=[ 435], 50.00th=[ 460], 60.00th=[ 472], 00:09:58.318 | 70.00th=[ 498], 80.00th=[ 523], 90.00th=[ 600], 95.00th=[ 634], 00:09:58.318 | 99.00th=[ 718], 99.50th=[ 718], 99.90th=[ 726], 99.95th=[ 726], 00:09:58.318 | 99.99th=[ 726] 00:09:58.318 bw ( KiB/s): min=14336, max=153600, per=39.66%, avg=102677.86, stdev=55845.65, samples=7 00:09:58.318 iops : min= 14, max= 150, avg=100.14, stdev=54.68, samples=7 00:09:58.318 write: IOPS=122, BW=104MiB/s (109MB/s)(373MiB/3581msec); 0 zone resets 00:09:58.318 slat (usec): min=95, max=109991, avg=3862.17, stdev=8281.84 00:09:58.318 clat (msec): min=346, max=763, avg=497.17, stdev=96.00 00:09:58.318 lat (msec): min=355, max=763, avg=500.63, stdev=96.13 00:09:58.318 clat percentiles (msec): 00:09:58.318 | 1.00th=[ 355], 5.00th=[ 363], 10.00th=[ 376], 20.00th=[ 414], 00:09:58.318 | 30.00th=[ 435], 40.00th=[ 468], 50.00th=[ 485], 60.00th=[ 510], 00:09:58.318 | 70.00th=[ 531], 80.00th=[ 575], 90.00th=[ 651], 95.00th=[ 676], 00:09:58.318 | 99.00th=[ 735], 99.50th=[ 760], 99.90th=[ 760], 99.95th=[ 760], 00:09:58.318 | 99.99th=[ 760] 00:09:58.319 bw ( KiB/s): min=36790, max=182272, per=46.65%, avg=127305.00, stdev=50780.71, samples=6 00:09:58.319 iops : min= 35, max= 178, avg=124.17, stdev=49.92, samples=6 00:09:58.319 lat (msec) : 500=53.87%, 750=30.87%, 1000=0.23% 00:09:58.319 cpu : usr=1.42%, sys=2.07%, ctx=386, majf=0, minf=2 00:09:58.319 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=3.8%, >=64=92.6% 00:09:58.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.319 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:58.319 issued rwts: total=414,438,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.319 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:58.319 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=69183: Wed Jul 
24 19:48:26 2024 00:09:58.319 read: IOPS=185, BW=175MiB/s (184MB/s)(794MiB/4529msec) 00:09:58.319 slat (usec): min=34, max=152726, avg=2340.94, stdev=7075.00 00:09:58.319 clat (msec): min=108, max=817, avg=295.25, stdev=118.61 00:09:58.319 lat (msec): min=108, max=830, avg=297.68, stdev=118.92 00:09:58.319 clat percentiles (msec): 00:09:58.319 | 1.00th=[ 136], 5.00th=[ 167], 10.00th=[ 182], 20.00th=[ 215], 00:09:58.319 | 30.00th=[ 234], 40.00th=[ 255], 50.00th=[ 275], 60.00th=[ 300], 00:09:58.319 | 70.00th=[ 326], 80.00th=[ 351], 90.00th=[ 393], 95.00th=[ 456], 00:09:58.319 | 99.00th=[ 802], 99.50th=[ 818], 99.90th=[ 818], 99.95th=[ 818], 00:09:58.319 | 99.99th=[ 818] 00:09:58.319 bw ( KiB/s): min=128766, max=296960, per=75.82%, avg=196272.62, stdev=49338.30, samples=8 00:09:58.319 iops : min= 125, max= 290, avg=191.50, stdev=48.34, samples=8 00:09:58.319 write: IOPS=202, BW=184MiB/s (193MB/s)(834MiB/4529msec); 0 zone resets 00:09:58.319 slat (usec): min=56, max=688583, avg=2778.71, stdev=23075.54 00:09:58.319 clat (msec): min=122, max=872, avg=338.11, stdev=120.07 00:09:58.319 lat (msec): min=122, max=873, avg=340.27, stdev=120.30 00:09:58.319 clat percentiles (msec): 00:09:58.319 | 1.00th=[ 130], 5.00th=[ 201], 10.00th=[ 228], 20.00th=[ 268], 00:09:58.319 | 30.00th=[ 279], 40.00th=[ 292], 50.00th=[ 321], 60.00th=[ 338], 00:09:58.319 | 70.00th=[ 372], 80.00th=[ 393], 90.00th=[ 439], 95.00th=[ 485], 00:09:58.319 | 99.00th=[ 860], 99.50th=[ 869], 99.90th=[ 877], 99.95th=[ 877], 00:09:58.319 | 99.99th=[ 877] 00:09:58.319 bw ( KiB/s): min=120590, max=284672, per=75.68%, avg=206519.25, stdev=54155.88, samples=8 00:09:58.319 iops : min= 117, max= 278, avg=201.50, stdev=53.13, samples=8 00:09:58.319 lat (msec) : 250=23.69%, 500=65.15%, 750=0.80%, 1000=3.08% 00:09:58.319 cpu : usr=2.56%, sys=2.96%, ctx=427, majf=0, minf=1 00:09:58.319 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:09:58.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:09:58.319 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:58.319 issued rwts: total=838,918,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.319 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:58.319 00:09:58.319 Run status group 0 (all jobs): 00:09:58.319 READ: bw=253MiB/s (265MB/s), 98.0MiB/s-175MiB/s (103MB/s-184MB/s), io=1145MiB (1201MB), run=3581-4529msec 00:09:58.319 WRITE: bw=267MiB/s (279MB/s), 104MiB/s-184MiB/s (109MB/s-193MB/s), io=1207MiB (1266MB), run=3581-4529msec 00:09:58.319 00:09:58.319 Disk stats (read/write): 00:09:58.319 sda: ios=454/432, merge=0/0, ticks=81528/102465, in_queue=183993, util=80.06% 00:09:58.319 sdb: ios=842/853, merge=0/0, ticks=83944/124584, in_queue=208527, util=90.80% 00:09:58.319 iscsi hotplug test: fio failed as expected 00:09:58.319 Cleaning up iSCSI connection 00:09:58.319 19:48:26 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@132 -- # fio_status=2 00:09:58.319 19:48:26 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@134 -- # '[' 2 -eq 0 ']' 00:09:58.319 19:48:26 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@138 -- # echo 'iscsi hotplug test: fio failed as expected' 00:09:58.319 19:48:26 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@141 -- # iscsicleanup 00:09:58.319 19:48:26 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection' 00:09:58.319 19:48:26 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout 00:09:58.319 Logging out of session [sid: 11, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:09:58.319 Logout of [sid: 11, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
00:09:58.319 19:48:26 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete 00:09:58.319 19:48:26 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@985 -- # rm -rf 00:09:58.319 19:48:26 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@142 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_delete_target_node iqn.2016-06.io.spdk:Target3 00:09:58.577 19:48:27 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@144 -- # delete_tmp_files 00:09:58.577 19:48:27 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@14 -- # rm -f /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/fio/iscsi2.json 00:09:58.577 19:48:27 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@15 -- # rm -f ./local-job0-0-verify.state 00:09:58.577 19:48:27 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@16 -- # rm -f ./local-job1-1-verify.state 00:09:58.577 19:48:27 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:09:58.577 19:48:27 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@148 -- # killprocess 68765 00:09:58.577 19:48:27 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@950 -- # '[' -z 68765 ']' 00:09:58.577 19:48:27 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@954 -- # kill -0 68765 00:09:58.577 19:48:27 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@955 -- # uname 00:09:58.577 19:48:27 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:58.577 19:48:27 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68765 00:09:58.866 killing process with pid 68765 00:09:58.866 19:48:27 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:58.866 19:48:27 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:58.866 19:48:27 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68765' 00:09:58.866 19:48:27 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@969 -- # kill 68765 00:09:58.866 19:48:27 
iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@974 -- # wait 68765 00:09:59.432 19:48:27 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@150 -- # iscsitestfini 00:09:59.432 19:48:27 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:09:59.432 00:09:59.432 real 0m18.527s 00:09:59.432 user 0m17.317s 00:09:59.432 sys 0m7.095s 00:09:59.432 19:48:27 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:59.433 ************************************ 00:09:59.433 END TEST iscsi_tgt_fio 00:09:59.433 ************************************ 00:09:59.433 19:48:27 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@10 -- # set +x 00:09:59.433 19:48:27 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@38 -- # run_test iscsi_tgt_qos /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/qos/qos.sh 00:09:59.433 19:48:27 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:59.433 19:48:27 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:59.433 19:48:27 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:09:59.433 ************************************ 00:09:59.433 START TEST iscsi_tgt_qos 00:09:59.433 ************************************ 00:09:59.433 19:48:27 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/qos/qos.sh 00:09:59.433 * Looking for test storage... 
00:09:59.433 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/qos 00:09:59.433 19:48:27 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:09:59.433 19:48:27 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:09:59.433 19:48:27 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:09:59.433 19:48:27 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:09:59.433 19:48:27 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:09:59.433 19:48:27 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:09:59.433 19:48:27 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:09:59.433 19:48:27 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:09:59.433 19:48:27 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:09:59.433 19:48:27 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:09:59.433 19:48:27 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:09:59.433 19:48:27 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:09:59.433 19:48:27 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:09:59.433 19:48:27 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:09:59.433 19:48:27 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:09:59.433 19:48:28 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:09:59.433 19:48:28 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:09:59.433 19:48:28 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:09:59.433 19:48:28 iscsi_tgt.iscsi_tgt_qos -- 
iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:09:59.433 19:48:28 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:09:59.433 19:48:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@11 -- # iscsitestinit 00:09:59.433 19:48:28 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:09:59.433 19:48:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@44 -- # '[' -z 10.0.0.1 ']' 00:09:59.433 19:48:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@49 -- # '[' -z 10.0.0.2 ']' 00:09:59.433 19:48:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@54 -- # MALLOC_BDEV_SIZE=64 00:09:59.433 19:48:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@55 -- # MALLOC_BLOCK_SIZE=512 00:09:59.433 19:48:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@56 -- # IOPS_RESULT= 00:09:59.433 19:48:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@57 -- # BANDWIDTH_RESULT= 00:09:59.433 19:48:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@58 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:09:59.433 19:48:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@60 -- # timing_enter start_iscsi_tgt 00:09:59.433 19:48:28 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:59.433 19:48:28 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:09:59.433 19:48:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@63 -- # pid=69353 00:09:59.433 19:48:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@64 -- # echo 'Process pid: 69353' 00:09:59.433 Process pid: 69353 00:09:59.433 19:48:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@65 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:09:59.433 19:48:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@62 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt 00:09:59.433 19:48:28 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@66 -- # waitforlisten 69353 00:09:59.433 19:48:28 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@831 -- # '[' -z 69353 ']' 00:09:59.433 19:48:28 
iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.433 19:48:28 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:59.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.433 19:48:28 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.433 19:48:28 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:59.433 19:48:28 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:09:59.433 [2024-07-24 19:48:28.069009] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:09:59.433 [2024-07-24 19:48:28.069112] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69353 ] 00:09:59.692 [2024-07-24 19:48:28.219768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.950 [2024-07-24 19:48:28.390692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.950 [2024-07-24 19:48:28.472001] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:00.514 19:48:29 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:00.514 19:48:29 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@864 -- # return 0 00:10:00.514 19:48:29 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@67 -- # echo 'iscsi_tgt is listening. Running tests...' 00:10:00.514 iscsi_tgt is listening. Running tests... 
00:10:00.514 19:48:29 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@69 -- # timing_exit start_iscsi_tgt 00:10:00.514 19:48:29 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:00.514 19:48:29 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:00.514 19:48:29 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@71 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:10:00.514 19:48:29 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.514 19:48:29 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:00.514 19:48:29 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.514 19:48:29 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@72 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:10:00.514 19:48:29 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.514 19:48:29 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:00.514 19:48:29 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.515 19:48:29 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@73 -- # rpc_cmd bdev_malloc_create 64 512 00:10:00.515 19:48:29 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.515 19:48:29 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:00.515 Malloc0 00:10:00.515 19:48:29 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.515 19:48:29 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@78 -- # rpc_cmd iscsi_create_target_node Target1 Target1_alias Malloc0:0 1:2 64 -d 00:10:00.515 19:48:29 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.515 19:48:29 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:00.515 19:48:29 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.515 19:48:29 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@79 
-- # sleep 1 00:10:01.887 19:48:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@81 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:10:01.887 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:10:01.887 19:48:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@82 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:10:01.887 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:10:01.887 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:10:01.887 19:48:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@84 -- # trap 'iscsicleanup; killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:10:01.887 19:48:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@87 -- # run_fio Malloc0 00:10:01.887 19:48:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:10:01.887 19:48:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:10:01.887 19:48:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:10:01.887 19:48:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:10:01.887 19:48:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:10:01.887 19:48:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:10:01.887 19:48:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:10:01.887 19:48:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:10:01.887 19:48:30 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.887 19:48:30 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:01.887 [2024-07-24 19:48:30.212503] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:01.887 19:48:30 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.887 19:48:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:10:01.887 "tick_rate": 2100000000, 
00:10:01.887 "ticks": 1186617044544, 00:10:01.887 "bdevs": [ 00:10:01.887 { 00:10:01.887 "name": "Malloc0", 00:10:01.887 "bytes_read": 41472, 00:10:01.887 "num_read_ops": 4, 00:10:01.887 "bytes_written": 0, 00:10:01.887 "num_write_ops": 0, 00:10:01.887 "bytes_unmapped": 0, 00:10:01.887 "num_unmap_ops": 0, 00:10:01.887 "bytes_copied": 0, 00:10:01.887 "num_copy_ops": 0, 00:10:01.887 "read_latency_ticks": 1012644, 00:10:01.887 "max_read_latency_ticks": 392854, 00:10:01.887 "min_read_latency_ticks": 40858, 00:10:01.887 "write_latency_ticks": 0, 00:10:01.887 "max_write_latency_ticks": 0, 00:10:01.887 "min_write_latency_ticks": 0, 00:10:01.887 "unmap_latency_ticks": 0, 00:10:01.887 "max_unmap_latency_ticks": 0, 00:10:01.887 "min_unmap_latency_ticks": 0, 00:10:01.887 "copy_latency_ticks": 0, 00:10:01.887 "max_copy_latency_ticks": 0, 00:10:01.887 "min_copy_latency_ticks": 0, 00:10:01.887 "io_error": {} 00:10:01.887 } 00:10:01.887 ] 00:10:01.887 }' 00:10:01.887 19:48:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:10:01.887 19:48:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=4 00:10:01.887 19:48:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:10:01.888 19:48:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=41472 00:10:01.888 19:48:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:10:01.888 [global] 00:10:01.888 thread=1 00:10:01.888 invalidate=1 00:10:01.888 rw=randread 00:10:01.888 time_based=1 00:10:01.888 runtime=5 00:10:01.888 ioengine=libaio 00:10:01.888 direct=1 00:10:01.888 bs=1024 00:10:01.888 iodepth=128 00:10:01.888 norandommap=1 00:10:01.888 numjobs=1 00:10:01.888 00:10:01.888 [job0] 00:10:01.888 filename=/dev/sda 00:10:01.888 queue_depth set to 113 (sda) 00:10:01.888 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, 
iodepth=128 00:10:01.888 fio-3.35 00:10:01.888 Starting 1 thread 00:10:07.151 00:10:07.151 job0: (groupid=0, jobs=1): err= 0: pid=69437: Wed Jul 24 19:48:35 2024 00:10:07.151 read: IOPS=47.6k, BW=46.5MiB/s (48.8MB/s)(233MiB/5003msec) 00:10:07.151 slat (nsec): min=1907, max=1020.6k, avg=19143.35, stdev=53478.07 00:10:07.151 clat (usec): min=1206, max=7055, avg=2666.53, stdev=326.81 00:10:07.151 lat (usec): min=1213, max=7065, avg=2685.67, stdev=324.69 00:10:07.151 clat percentiles (usec): 00:10:07.151 | 1.00th=[ 2212], 5.00th=[ 2343], 10.00th=[ 2409], 20.00th=[ 2474], 00:10:07.151 | 30.00th=[ 2540], 40.00th=[ 2606], 50.00th=[ 2638], 60.00th=[ 2671], 00:10:07.151 | 70.00th=[ 2704], 80.00th=[ 2737], 90.00th=[ 2802], 95.00th=[ 3195], 00:10:07.151 | 99.00th=[ 3851], 99.50th=[ 4080], 99.90th=[ 6259], 99.95th=[ 6456], 00:10:07.151 | 99.99th=[ 6849] 00:10:07.151 bw ( KiB/s): min=39458, max=51382, per=100.00%, avg=47827.11, stdev=3458.67, samples=9 00:10:07.151 iops : min=39458, max=51382, avg=47827.11, stdev=3458.67, samples=9 00:10:07.151 lat (msec) : 2=0.37%, 4=99.01%, 10=0.62% 00:10:07.151 cpu : usr=8.00%, sys=21.83%, ctx=163102, majf=0, minf=32 00:10:07.151 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:10:07.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.151 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:07.151 issued rwts: total=238326,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.151 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:07.151 00:10:07.151 Run status group 0 (all jobs): 00:10:07.151 READ: bw=46.5MiB/s (48.8MB/s), 46.5MiB/s-46.5MiB/s (48.8MB/s-48.8MB/s), io=233MiB (244MB), run=5003-5003msec 00:10:07.151 00:10:07.151 Disk stats (read/write): 00:10:07.151 sda: ios=233124/0, merge=0/0, ticks=520455/0, in_queue=520455, util=98.13% 00:10:07.151 19:48:35 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:10:07.151 
19:48:35 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.151 19:48:35 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:07.151 19:48:35 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.151 19:48:35 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:10:07.151 "tick_rate": 2100000000, 00:10:07.151 "ticks": 1198055224236, 00:10:07.151 "bdevs": [ 00:10:07.151 { 00:10:07.151 "name": "Malloc0", 00:10:07.151 "bytes_read": 245156352, 00:10:07.151 "num_read_ops": 238383, 00:10:07.151 "bytes_written": 0, 00:10:07.151 "num_write_ops": 0, 00:10:07.151 "bytes_unmapped": 0, 00:10:07.151 "num_unmap_ops": 0, 00:10:07.151 "bytes_copied": 0, 00:10:07.151 "num_copy_ops": 0, 00:10:07.151 "read_latency_ticks": 54455626118, 00:10:07.151 "max_read_latency_ticks": 1326410, 00:10:07.151 "min_read_latency_ticks": 10940, 00:10:07.151 "write_latency_ticks": 0, 00:10:07.151 "max_write_latency_ticks": 0, 00:10:07.151 "min_write_latency_ticks": 0, 00:10:07.151 "unmap_latency_ticks": 0, 00:10:07.151 "max_unmap_latency_ticks": 0, 00:10:07.151 "min_unmap_latency_ticks": 0, 00:10:07.151 "copy_latency_ticks": 0, 00:10:07.151 "max_copy_latency_ticks": 0, 00:10:07.151 "min_copy_latency_ticks": 0, 00:10:07.151 "io_error": {} 00:10:07.151 } 00:10:07.151 ] 00:10:07.151 }' 00:10:07.151 19:48:35 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:10:07.151 19:48:35 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=238383 00:10:07.151 19:48:35 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:10:07.151 19:48:35 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=245156352 00:10:07.151 19:48:35 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=47675 00:10:07.151 19:48:35 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=49022976 00:10:07.151 19:48:35 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@90 -- # IOPS_LIMIT=23837 
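The IOPS_RESULT/BANDWIDTH_RESULT values above, and the limits chosen right after them, follow from simple integer arithmetic over the two `bdev_get_iostat` samples bracketing the fio run. A minimal sketch of that arithmetic, using the numbers from this log (the variable names are illustrative, not qos.sh's exact ones; the halving and rounding steps are inferred from the values the log prints):

```shell
# Sketch of the result/limit arithmetic seen in this log.
# Counters come from the two bdev_get_iostat samples around the first fio run.
run_time=5                       # matches run_time=5 in run_fio
start_io=4;        end_io=238383
start_bytes=41472; end_bytes=245156352

iops=$(( (end_io - start_io) / run_time ))            # IOPS_RESULT
bandwidth=$(( (end_bytes - start_bytes) / run_time )) # BANDWIDTH_RESULT, bytes/s

# Limits: half (or a quarter, for the read-only cap) of the measured value,
# rounded down to friendly units (nearest 1000 IOPS, whole MiB).
iops_limit=$(( iops / 2 / 1000 * 1000 ))
bw_limit_mb=$(( bandwidth / 2 / 1024 / 1024 ))
read_bw_limit_mb=$(( bandwidth / 4 / 1024 / 1024 ))
echo "$iops $bandwidth $iops_limit ${bw_limit_mb}MiB ${read_bw_limit_mb}MiB"
```

Run against this log's counters, the sketch reproduces IOPS_RESULT=47675, BANDWIDTH_RESULT=49022976, and the 23000 / 23 MiB / 11 MiB limits applied below.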
00:10:07.151 19:48:35 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@91 -- # BANDWIDTH_LIMIT=24511488 00:10:07.151 19:48:35 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@94 -- # READ_BANDWIDTH_LIMIT=12255744 00:10:07.151 19:48:35 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@98 -- # IOPS_LIMIT=23000 00:10:07.151 19:48:35 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@99 -- # BANDWIDTH_LIMIT_MB=23 00:10:07.151 19:48:35 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@100 -- # BANDWIDTH_LIMIT=24117248 00:10:07.151 19:48:35 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@101 -- # READ_BANDWIDTH_LIMIT_MB=11 00:10:07.151 19:48:35 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@102 -- # READ_BANDWIDTH_LIMIT=11534336 00:10:07.151 19:48:35 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@105 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_ios_per_sec 23000 00:10:07.151 19:48:35 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.151 19:48:35 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:07.151 19:48:35 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.151 19:48:35 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@106 -- # run_fio Malloc0 00:10:07.151 19:48:35 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:10:07.151 19:48:35 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:10:07.152 19:48:35 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:10:07.152 19:48:35 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:10:07.152 19:48:35 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:10:07.152 19:48:35 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:10:07.152 19:48:35 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:10:07.152 19:48:35 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:10:07.152 19:48:35 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.152 19:48:35 
iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:07.152 19:48:35 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.152 19:48:35 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:10:07.152 "tick_rate": 2100000000, 00:10:07.152 "ticks": 1198324158070, 00:10:07.152 "bdevs": [ 00:10:07.152 { 00:10:07.152 "name": "Malloc0", 00:10:07.152 "bytes_read": 245156352, 00:10:07.152 "num_read_ops": 238383, 00:10:07.152 "bytes_written": 0, 00:10:07.152 "num_write_ops": 0, 00:10:07.152 "bytes_unmapped": 0, 00:10:07.152 "num_unmap_ops": 0, 00:10:07.152 "bytes_copied": 0, 00:10:07.152 "num_copy_ops": 0, 00:10:07.152 "read_latency_ticks": 54455626118, 00:10:07.152 "max_read_latency_ticks": 1326410, 00:10:07.152 "min_read_latency_ticks": 10940, 00:10:07.152 "write_latency_ticks": 0, 00:10:07.152 "max_write_latency_ticks": 0, 00:10:07.152 "min_write_latency_ticks": 0, 00:10:07.152 "unmap_latency_ticks": 0, 00:10:07.152 "max_unmap_latency_ticks": 0, 00:10:07.152 "min_unmap_latency_ticks": 0, 00:10:07.152 "copy_latency_ticks": 0, 00:10:07.152 "max_copy_latency_ticks": 0, 00:10:07.152 "min_copy_latency_ticks": 0, 00:10:07.152 "io_error": {} 00:10:07.152 } 00:10:07.152 ] 00:10:07.152 }' 00:10:07.152 19:48:35 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:10:07.410 19:48:35 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=238383 00:10:07.410 19:48:35 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:10:07.410 19:48:35 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=245156352 00:10:07.410 19:48:35 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:10:07.410 [global] 00:10:07.410 thread=1 00:10:07.410 invalidate=1 00:10:07.410 rw=randread 00:10:07.410 time_based=1 00:10:07.410 runtime=5 00:10:07.410 ioengine=libaio 00:10:07.410 direct=1 00:10:07.410 
bs=1024 00:10:07.410 iodepth=128 00:10:07.410 norandommap=1 00:10:07.410 numjobs=1 00:10:07.410 00:10:07.410 [job0] 00:10:07.410 filename=/dev/sda 00:10:07.410 queue_depth set to 113 (sda) 00:10:07.410 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:10:07.410 fio-3.35 00:10:07.410 Starting 1 thread 00:10:12.676 00:10:12.676 job0: (groupid=0, jobs=1): err= 0: pid=69528: Wed Jul 24 19:48:41 2024 00:10:12.676 read: IOPS=23.0k, BW=22.5MiB/s (23.6MB/s)(112MiB/5004msec) 00:10:12.676 slat (usec): min=2, max=2758, avg=40.86, stdev=159.66 00:10:12.676 clat (usec): min=1982, max=8980, avg=5521.16, stdev=456.55 00:10:12.676 lat (usec): min=2006, max=8983, avg=5562.02, stdev=454.72 00:10:12.676 clat percentiles (usec): 00:10:12.676 | 1.00th=[ 4424], 5.00th=[ 4883], 10.00th=[ 5080], 20.00th=[ 5145], 00:10:12.676 | 30.00th=[ 5211], 40.00th=[ 5211], 50.00th=[ 5538], 60.00th=[ 5800], 00:10:12.676 | 70.00th=[ 5866], 80.00th=[ 5932], 90.00th=[ 5997], 95.00th=[ 6063], 00:10:12.676 | 99.00th=[ 6390], 99.50th=[ 6587], 99.90th=[ 6980], 99.95th=[ 7177], 00:10:12.676 | 99.99th=[ 8029] 00:10:12.676 bw ( KiB/s): min=22972, max=23048, per=100.00%, avg=23027.78, stdev=28.27, samples=9 00:10:12.676 iops : min=22972, max=23050, avg=23028.00, stdev=28.46, samples=9 00:10:12.676 lat (msec) : 2=0.01%, 4=0.20%, 10=99.80% 00:10:12.676 cpu : usr=6.30%, sys=13.39%, ctx=61648, majf=0, minf=32 00:10:12.676 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:12.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.677 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:12.677 issued rwts: total=115114,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.677 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:12.677 00:10:12.677 Run status group 0 (all jobs): 00:10:12.677 READ: bw=22.5MiB/s (23.6MB/s), 22.5MiB/s-22.5MiB/s (23.6MB/s-23.6MB/s), io=112MiB 
(118MB), run=5004-5004msec 00:10:12.677 00:10:12.677 Disk stats (read/write): 00:10:12.677 sda: ios=112493/0, merge=0/0, ticks=531918/0, in_queue=531918, util=98.15% 00:10:12.677 19:48:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:10:12.677 19:48:41 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.677 19:48:41 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:12.677 19:48:41 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.677 19:48:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:10:12.677 "tick_rate": 2100000000, 00:10:12.677 "ticks": 1209756964558, 00:10:12.677 "bdevs": [ 00:10:12.677 { 00:10:12.677 "name": "Malloc0", 00:10:12.677 "bytes_read": 363033088, 00:10:12.677 "num_read_ops": 353497, 00:10:12.677 "bytes_written": 0, 00:10:12.677 "num_write_ops": 0, 00:10:12.677 "bytes_unmapped": 0, 00:10:12.677 "num_unmap_ops": 0, 00:10:12.677 "bytes_copied": 0, 00:10:12.677 "num_copy_ops": 0, 00:10:12.677 "read_latency_ticks": 587314483618, 00:10:12.677 "max_read_latency_ticks": 7060828, 00:10:12.677 "min_read_latency_ticks": 10940, 00:10:12.677 "write_latency_ticks": 0, 00:10:12.677 "max_write_latency_ticks": 0, 00:10:12.677 "min_write_latency_ticks": 0, 00:10:12.677 "unmap_latency_ticks": 0, 00:10:12.677 "max_unmap_latency_ticks": 0, 00:10:12.677 "min_unmap_latency_ticks": 0, 00:10:12.677 "copy_latency_ticks": 0, 00:10:12.677 "max_copy_latency_ticks": 0, 00:10:12.677 "min_copy_latency_ticks": 0, 00:10:12.677 "io_error": {} 00:10:12.677 } 00:10:12.677 ] 00:10:12.677 }' 00:10:12.677 19:48:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:10:12.677 19:48:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=353497 00:10:12.677 19:48:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:10:12.677 19:48:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # 
end_bytes_read=363033088 00:10:12.677 19:48:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=23022 00:10:12.677 19:48:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=23575347 00:10:12.677 19:48:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@107 -- # verify_qos_limits 23022 23000 00:10:12.677 19:48:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@37 -- # local result=23022 00:10:12.677 19:48:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@38 -- # local limit=23000 00:10:12.677 19:48:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # bc 00:10:12.937 19:48:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # '[' 1 -eq 1 ']' 00:10:12.937 19:48:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # bc 00:10:12.937 19:48:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # '[' 1 -eq 1 ']' 00:10:12.937 19:48:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@110 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_ios_per_sec 0 00:10:12.937 19:48:41 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.937 19:48:41 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:12.937 19:48:41 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.937 19:48:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@111 -- # run_fio Malloc0 00:10:12.937 19:48:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:10:12.937 19:48:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:10:12.937 19:48:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:10:12.937 19:48:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:10:12.937 19:48:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:10:12.937 19:48:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:10:12.937 19:48:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:10:12.937 19:48:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 
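With the 23000 IOPS cap applied, the rate-limited run lands at IOPS_RESULT=23022, and `verify_qos_limits 23022 23000` passes because both `bc` comparisons print 1, i.e. the result falls inside a tolerance band around the limit. A hedged sketch of such a check, using plain integer arithmetic instead of `bc` — the 0.9/1.1 band here is an assumed example, not qos.sh's exact tolerance:

```shell
# Sketch of a verify_qos_limits-style tolerance check.
# The 0.9x..1.1x band is an assumed example; qos.sh's actual bc
# expressions may use different factors.
verify_qos_limits() {
    local result=$1 limit=$2
    [ "$result" -ge $(( limit * 9 / 10 )) ] &&
    [ "$result" -le $(( limit * 11 / 10 )) ]
}

# Delta over the iostat samples bracketing the rate-limited run:
iops=$(( (353497 - 238383) / 5 ))
verify_qos_limits "$iops" 23000 && echo "within limit band"
```

23022 sits well inside any reasonable band around 23000 (about 0.1% over), so the check passes just as the two `bc` results in the log do.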
00:10:12.937 19:48:41 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.937 19:48:41 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:12.937 19:48:41 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.937 19:48:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:10:12.937 "tick_rate": 2100000000, 00:10:12.937 "ticks": 1210042723200, 00:10:12.937 "bdevs": [ 00:10:12.937 { 00:10:12.937 "name": "Malloc0", 00:10:12.937 "bytes_read": 363033088, 00:10:12.937 "num_read_ops": 353497, 00:10:12.937 "bytes_written": 0, 00:10:12.937 "num_write_ops": 0, 00:10:12.937 "bytes_unmapped": 0, 00:10:12.937 "num_unmap_ops": 0, 00:10:12.937 "bytes_copied": 0, 00:10:12.937 "num_copy_ops": 0, 00:10:12.937 "read_latency_ticks": 587314483618, 00:10:12.937 "max_read_latency_ticks": 7060828, 00:10:12.937 "min_read_latency_ticks": 10940, 00:10:12.937 "write_latency_ticks": 0, 00:10:12.937 "max_write_latency_ticks": 0, 00:10:12.937 "min_write_latency_ticks": 0, 00:10:12.937 "unmap_latency_ticks": 0, 00:10:12.937 "max_unmap_latency_ticks": 0, 00:10:12.937 "min_unmap_latency_ticks": 0, 00:10:12.937 "copy_latency_ticks": 0, 00:10:12.937 "max_copy_latency_ticks": 0, 00:10:12.937 "min_copy_latency_ticks": 0, 00:10:12.937 "io_error": {} 00:10:12.937 } 00:10:12.937 ] 00:10:12.937 }' 00:10:12.937 19:48:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:10:12.937 19:48:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=353497 00:10:12.937 19:48:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:10:12.937 19:48:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=363033088 00:10:12.937 19:48:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:10:12.937 [global] 00:10:12.938 thread=1 00:10:12.938 invalidate=1 00:10:12.938 
rw=randread 00:10:12.938 time_based=1 00:10:12.938 runtime=5 00:10:12.938 ioengine=libaio 00:10:12.938 direct=1 00:10:12.938 bs=1024 00:10:12.938 iodepth=128 00:10:12.938 norandommap=1 00:10:12.938 numjobs=1 00:10:12.938 00:10:12.938 [job0] 00:10:12.938 filename=/dev/sda 00:10:12.938 queue_depth set to 113 (sda) 00:10:13.197 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:10:13.197 fio-3.35 00:10:13.197 Starting 1 thread 00:10:18.466 00:10:18.466 job0: (groupid=0, jobs=1): err= 0: pid=69617: Wed Jul 24 19:48:46 2024 00:10:18.466 read: IOPS=41.3k, BW=40.3MiB/s (42.3MB/s)(202MiB/5004msec) 00:10:18.466 slat (nsec): min=1952, max=1023.9k, avg=22184.30, stdev=64860.82 00:10:18.466 clat (usec): min=1421, max=6673, avg=3075.09, stdev=224.27 00:10:18.466 lat (usec): min=1430, max=6683, avg=3097.27, stdev=216.30 00:10:18.466 clat percentiles (usec): 00:10:18.466 | 1.00th=[ 2573], 5.00th=[ 2737], 10.00th=[ 2835], 20.00th=[ 2900], 00:10:18.466 | 30.00th=[ 2933], 40.00th=[ 3032], 50.00th=[ 3130], 60.00th=[ 3195], 00:10:18.466 | 70.00th=[ 3195], 80.00th=[ 3228], 90.00th=[ 3261], 95.00th=[ 3326], 00:10:18.466 | 99.00th=[ 3490], 99.50th=[ 3654], 99.90th=[ 4817], 99.95th=[ 5473], 00:10:18.466 | 99.99th=[ 6259] 00:10:18.466 bw ( KiB/s): min=40064, max=43536, per=100.00%, avg=41399.56, stdev=1193.82, samples=9 00:10:18.466 iops : min=40064, max=43536, avg=41399.56, stdev=1193.82, samples=9 00:10:18.466 lat (msec) : 2=0.10%, 4=99.65%, 10=0.25% 00:10:18.466 cpu : usr=8.53%, sys=18.55%, ctx=117286, majf=0, minf=32 00:10:18.466 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:10:18.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.466 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:18.466 issued rwts: total=206716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.466 latency : target=0, window=0, percentile=100.00%, depth=128 
00:10:18.466 00:10:18.466 Run status group 0 (all jobs): 00:10:18.466 READ: bw=40.3MiB/s (42.3MB/s), 40.3MiB/s-40.3MiB/s (42.3MB/s-42.3MB/s), io=202MiB (212MB), run=5004-5004msec 00:10:18.466 00:10:18.466 Disk stats (read/write): 00:10:18.466 sda: ios=201994/0, merge=0/0, ticks=530350/0, in_queue=530350, util=98.11% 00:10:18.466 19:48:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:10:18.466 19:48:46 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.466 19:48:46 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:18.466 19:48:46 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.466 19:48:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:10:18.466 "tick_rate": 2100000000, 00:10:18.466 "ticks": 1221467504290, 00:10:18.466 "bdevs": [ 00:10:18.466 { 00:10:18.466 "name": "Malloc0", 00:10:18.466 "bytes_read": 574710272, 00:10:18.466 "num_read_ops": 560213, 00:10:18.466 "bytes_written": 0, 00:10:18.466 "num_write_ops": 0, 00:10:18.466 "bytes_unmapped": 0, 00:10:18.466 "num_unmap_ops": 0, 00:10:18.466 "bytes_copied": 0, 00:10:18.466 "num_copy_ops": 0, 00:10:18.466 "read_latency_ticks": 642546587970, 00:10:18.466 "max_read_latency_ticks": 7060828, 00:10:18.466 "min_read_latency_ticks": 10940, 00:10:18.466 "write_latency_ticks": 0, 00:10:18.466 "max_write_latency_ticks": 0, 00:10:18.466 "min_write_latency_ticks": 0, 00:10:18.466 "unmap_latency_ticks": 0, 00:10:18.466 "max_unmap_latency_ticks": 0, 00:10:18.466 "min_unmap_latency_ticks": 0, 00:10:18.466 "copy_latency_ticks": 0, 00:10:18.466 "max_copy_latency_ticks": 0, 00:10:18.466 "min_copy_latency_ticks": 0, 00:10:18.466 "io_error": {} 00:10:18.466 } 00:10:18.466 ] 00:10:18.466 }' 00:10:18.466 19:48:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:10:18.466 19:48:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=560213 00:10:18.466 
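After the cap is cleared (`--rw_ios_per_sec 0`), the uncapped re-run measures 41343 IOPS, well above 23000, so the script re-applies the limit. The two `bdev_set_qos_limit` RPC calls visible in this log can be issued by hand as below (a sketch against a running `iscsi_tgt` on the default RPC socket; the relative `scripts/rpc.py` path stands in for the repo path used in the log):

```shell
# Apply a read/write IOPS cap of 23000 to the Malloc0 bdev
./scripts/rpc.py bdev_set_qos_limit Malloc0 --rw_ios_per_sec 23000

# Passing 0 removes the cap again, as qos.sh does between runs
./scripts/rpc.py bdev_set_qos_limit Malloc0 --rw_ios_per_sec 0
```

Both invocations require the target process to be up and listening on `/var/tmp/spdk.sock`, as established at the start of this test.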
19:48:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:10:18.466 19:48:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=574710272 00:10:18.466 19:48:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=41343 00:10:18.466 19:48:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=42335436 00:10:18.466 19:48:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@112 -- # '[' 41343 -gt 23000 ']' 00:10:18.466 19:48:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@115 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_ios_per_sec 23000 00:10:18.466 19:48:46 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.466 19:48:46 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:18.466 19:48:46 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.466 19:48:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@116 -- # run_fio Malloc0 00:10:18.466 19:48:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:10:18.466 19:48:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:10:18.466 19:48:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:10:18.466 19:48:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:10:18.466 19:48:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:10:18.466 19:48:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:10:18.466 19:48:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:10:18.466 19:48:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:10:18.466 19:48:46 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.466 19:48:46 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:18.466 19:48:46 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.466 19:48:46 iscsi_tgt.iscsi_tgt_qos 
-- qos/qos.sh@22 -- # iostats='{ 00:10:18.466 "tick_rate": 2100000000, 00:10:18.466 "ticks": 1221702907524, 00:10:18.466 "bdevs": [ 00:10:18.466 { 00:10:18.466 "name": "Malloc0", 00:10:18.466 "bytes_read": 574710272, 00:10:18.466 "num_read_ops": 560213, 00:10:18.466 "bytes_written": 0, 00:10:18.466 "num_write_ops": 0, 00:10:18.466 "bytes_unmapped": 0, 00:10:18.466 "num_unmap_ops": 0, 00:10:18.466 "bytes_copied": 0, 00:10:18.466 "num_copy_ops": 0, 00:10:18.466 "read_latency_ticks": 642546587970, 00:10:18.466 "max_read_latency_ticks": 7060828, 00:10:18.466 "min_read_latency_ticks": 10940, 00:10:18.466 "write_latency_ticks": 0, 00:10:18.466 "max_write_latency_ticks": 0, 00:10:18.466 "min_write_latency_ticks": 0, 00:10:18.466 "unmap_latency_ticks": 0, 00:10:18.466 "max_unmap_latency_ticks": 0, 00:10:18.466 "min_unmap_latency_ticks": 0, 00:10:18.466 "copy_latency_ticks": 0, 00:10:18.466 "max_copy_latency_ticks": 0, 00:10:18.466 "min_copy_latency_ticks": 0, 00:10:18.466 "io_error": {} 00:10:18.466 } 00:10:18.466 ] 00:10:18.466 }' 00:10:18.466 19:48:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:10:18.466 19:48:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=560213 00:10:18.466 19:48:46 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:10:18.466 19:48:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=574710272 00:10:18.466 19:48:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:10:18.466 [global] 00:10:18.466 thread=1 00:10:18.466 invalidate=1 00:10:18.466 rw=randread 00:10:18.466 time_based=1 00:10:18.466 runtime=5 00:10:18.466 ioengine=libaio 00:10:18.466 direct=1 00:10:18.466 bs=1024 00:10:18.466 iodepth=128 00:10:18.466 norandommap=1 00:10:18.466 numjobs=1 00:10:18.466 00:10:18.466 [job0] 00:10:18.466 filename=/dev/sda 00:10:18.466 queue_depth set to 113 (sda) 00:10:18.725 job0: 
(g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:10:18.725 fio-3.35 00:10:18.725 Starting 1 thread 00:10:24.068 00:10:24.068 job0: (groupid=0, jobs=1): err= 0: pid=69708: Wed Jul 24 19:48:52 2024 00:10:24.068 read: IOPS=23.0k, BW=22.5MiB/s (23.6MB/s)(112MiB/5005msec) 00:10:24.069 slat (usec): min=2, max=1170, avg=40.84, stdev=156.90 00:10:24.069 clat (usec): min=922, max=10630, avg=5522.04, stdev=436.30 00:10:24.069 lat (usec): min=929, max=10639, avg=5562.89, stdev=434.51 00:10:24.069 clat percentiles (usec): 00:10:24.069 | 1.00th=[ 4555], 5.00th=[ 5014], 10.00th=[ 5080], 20.00th=[ 5080], 00:10:24.069 | 30.00th=[ 5145], 40.00th=[ 5211], 50.00th=[ 5604], 60.00th=[ 5800], 00:10:24.069 | 70.00th=[ 5866], 80.00th=[ 5932], 90.00th=[ 5997], 95.00th=[ 6063], 00:10:24.069 | 99.00th=[ 6128], 99.50th=[ 6128], 99.90th=[ 6390], 99.95th=[ 7832], 00:10:24.069 | 99.99th=[ 9765] 00:10:24.069 bw ( KiB/s): min=23000, max=23050, per=100.00%, avg=23031.11, stdev=23.37, samples=9 00:10:24.069 iops : min=23000, max=23050, avg=23031.11, stdev=23.37, samples=9 00:10:24.069 lat (usec) : 1000=0.01% 00:10:24.069 lat (msec) : 2=0.06%, 4=0.07%, 10=99.87%, 20=0.01% 00:10:24.069 cpu : usr=6.02%, sys=13.55%, ctx=62413, majf=0, minf=32 00:10:24.069 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:24.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.069 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:24.069 issued rwts: total=115132,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.069 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:24.069 00:10:24.069 Run status group 0 (all jobs): 00:10:24.069 READ: bw=22.5MiB/s (23.6MB/s), 22.5MiB/s-22.5MiB/s (23.6MB/s-23.6MB/s), io=112MiB (118MB), run=5005-5005msec 00:10:24.069 00:10:24.069 Disk stats (read/write): 00:10:24.069 sda: ios=112509/0, merge=0/0, ticks=532570/0, in_queue=532570, 
util=98.15% 00:10:24.069 19:48:52 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:10:24.069 19:48:52 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.069 19:48:52 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:24.069 19:48:52 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.069 19:48:52 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:10:24.069 "tick_rate": 2100000000, 00:10:24.069 "ticks": 1233142277714, 00:10:24.069 "bdevs": [ 00:10:24.069 { 00:10:24.069 "name": "Malloc0", 00:10:24.069 "bytes_read": 692605440, 00:10:24.069 "num_read_ops": 675345, 00:10:24.069 "bytes_written": 0, 00:10:24.069 "num_write_ops": 0, 00:10:24.069 "bytes_unmapped": 0, 00:10:24.069 "num_unmap_ops": 0, 00:10:24.069 "bytes_copied": 0, 00:10:24.069 "num_copy_ops": 0, 00:10:24.069 "read_latency_ticks": 1204655827292, 00:10:24.069 "max_read_latency_ticks": 7060828, 00:10:24.069 "min_read_latency_ticks": 10940, 00:10:24.069 "write_latency_ticks": 0, 00:10:24.069 "max_write_latency_ticks": 0, 00:10:24.069 "min_write_latency_ticks": 0, 00:10:24.069 "unmap_latency_ticks": 0, 00:10:24.069 "max_unmap_latency_ticks": 0, 00:10:24.069 "min_unmap_latency_ticks": 0, 00:10:24.069 "copy_latency_ticks": 0, 00:10:24.069 "max_copy_latency_ticks": 0, 00:10:24.069 "min_copy_latency_ticks": 0, 00:10:24.069 "io_error": {} 00:10:24.069 } 00:10:24.069 ] 00:10:24.069 }' 00:10:24.069 19:48:52 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:10:24.069 19:48:52 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=675345 00:10:24.069 19:48:52 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:10:24.069 19:48:52 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=692605440 00:10:24.069 19:48:52 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=23026 00:10:24.069 19:48:52 iscsi_tgt.iscsi_tgt_qos -- 
qos/qos.sh@33 -- # BANDWIDTH_RESULT=23579033 00:10:24.069 19:48:52 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@117 -- # verify_qos_limits 23026 23000 00:10:24.069 19:48:52 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@37 -- # local result=23026 00:10:24.069 19:48:52 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@38 -- # local limit=23000 00:10:24.069 19:48:52 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # bc 00:10:24.069 19:48:52 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # '[' 1 -eq 1 ']' 00:10:24.069 19:48:52 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # bc 00:10:24.069 19:48:52 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # '[' 1 -eq 1 ']' 00:10:24.069 I/O rate limiting tests successful 00:10:24.069 19:48:52 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@119 -- # echo 'I/O rate limiting tests successful' 00:10:24.069 19:48:52 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@122 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_ios_per_sec 0 --rw_mbytes_per_sec 23 00:10:24.069 19:48:52 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.069 19:48:52 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:24.069 19:48:52 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.069 19:48:52 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@123 -- # run_fio Malloc0 00:10:24.069 19:48:52 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:10:24.069 19:48:52 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:10:24.069 19:48:52 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:10:24.069 19:48:52 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:10:24.069 19:48:52 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:10:24.069 19:48:52 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:10:24.069 19:48:52 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:10:24.069 19:48:52 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd 
bdev_get_iostat -b Malloc0 00:10:24.069 19:48:52 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.069 19:48:52 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:24.069 19:48:52 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.069 19:48:52 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:10:24.069 "tick_rate": 2100000000, 00:10:24.069 "ticks": 1233427966246, 00:10:24.069 "bdevs": [ 00:10:24.069 { 00:10:24.069 "name": "Malloc0", 00:10:24.069 "bytes_read": 692605440, 00:10:24.069 "num_read_ops": 675345, 00:10:24.069 "bytes_written": 0, 00:10:24.069 "num_write_ops": 0, 00:10:24.069 "bytes_unmapped": 0, 00:10:24.069 "num_unmap_ops": 0, 00:10:24.069 "bytes_copied": 0, 00:10:24.069 "num_copy_ops": 0, 00:10:24.069 "read_latency_ticks": 1204655827292, 00:10:24.069 "max_read_latency_ticks": 7060828, 00:10:24.069 "min_read_latency_ticks": 10940, 00:10:24.069 "write_latency_ticks": 0, 00:10:24.069 "max_write_latency_ticks": 0, 00:10:24.069 "min_write_latency_ticks": 0, 00:10:24.069 "unmap_latency_ticks": 0, 00:10:24.069 "max_unmap_latency_ticks": 0, 00:10:24.069 "min_unmap_latency_ticks": 0, 00:10:24.069 "copy_latency_ticks": 0, 00:10:24.069 "max_copy_latency_ticks": 0, 00:10:24.069 "min_copy_latency_ticks": 0, 00:10:24.069 "io_error": {} 00:10:24.069 } 00:10:24.069 ] 00:10:24.069 }' 00:10:24.069 19:48:52 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:10:24.069 19:48:52 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=675345 00:10:24.069 19:48:52 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:10:24.069 19:48:52 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=692605440 00:10:24.069 19:48:52 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:10:24.069 [global] 00:10:24.069 thread=1 00:10:24.069 
invalidate=1 00:10:24.069 rw=randread 00:10:24.069 time_based=1 00:10:24.069 runtime=5 00:10:24.069 ioengine=libaio 00:10:24.069 direct=1 00:10:24.069 bs=1024 00:10:24.069 iodepth=128 00:10:24.069 norandommap=1 00:10:24.069 numjobs=1 00:10:24.069 00:10:24.069 [job0] 00:10:24.069 filename=/dev/sda 00:10:24.069 queue_depth set to 113 (sda) 00:10:24.328 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:10:24.328 fio-3.35 00:10:24.328 Starting 1 thread 00:10:29.672 00:10:29.672 job0: (groupid=0, jobs=1): err= 0: pid=69797: Wed Jul 24 19:48:57 2024 00:10:29.672 read: IOPS=23.6k, BW=23.0MiB/s (24.1MB/s)(115MiB/5005msec) 00:10:29.672 slat (nsec): min=1973, max=1375.9k, avg=39850.39, stdev=156828.05 00:10:29.672 clat (usec): min=2012, max=9495, avg=5392.31, stdev=439.44 00:10:29.672 lat (usec): min=2123, max=9499, avg=5432.16, stdev=444.41 00:10:29.672 clat percentiles (usec): 00:10:29.672 | 1.00th=[ 4490], 5.00th=[ 4752], 10.00th=[ 4948], 20.00th=[ 5014], 00:10:29.672 | 30.00th=[ 5080], 40.00th=[ 5145], 50.00th=[ 5211], 60.00th=[ 5407], 00:10:29.672 | 70.00th=[ 5800], 80.00th=[ 5866], 90.00th=[ 5997], 95.00th=[ 5997], 00:10:29.672 | 99.00th=[ 6259], 99.50th=[ 6325], 99.90th=[ 6587], 99.95th=[ 6915], 00:10:29.672 | 99.99th=[ 8455] 00:10:29.672 bw ( KiB/s): min=23552, max=23600, per=100.00%, avg=23574.89, stdev=20.55, samples=9 00:10:29.672 iops : min=23552, max=23600, avg=23574.89, stdev=20.55, samples=9 00:10:29.672 lat (msec) : 4=0.09%, 10=99.91% 00:10:29.672 cpu : usr=6.43%, sys=13.33%, ctx=63653, majf=0, minf=32 00:10:29.672 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:29.672 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.672 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.672 issued rwts: total=117873,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.672 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:10:29.672 00:10:29.672 Run status group 0 (all jobs): 00:10:29.672 READ: bw=23.0MiB/s (24.1MB/s), 23.0MiB/s-23.0MiB/s (24.1MB/s-24.1MB/s), io=115MiB (121MB), run=5005-5005msec 00:10:29.672 00:10:29.672 Disk stats (read/write): 00:10:29.672 sda: ios=115121/0, merge=0/0, ticks=530917/0, in_queue=530917, util=98.09% 00:10:29.672 19:48:57 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:10:29.672 19:48:57 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.672 19:48:57 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:29.672 19:48:57 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.672 19:48:57 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:10:29.672 "tick_rate": 2100000000, 00:10:29.672 "ticks": 1244841982686, 00:10:29.672 "bdevs": [ 00:10:29.672 { 00:10:29.672 "name": "Malloc0", 00:10:29.672 "bytes_read": 813307392, 00:10:29.672 "num_read_ops": 793218, 00:10:29.672 "bytes_written": 0, 00:10:29.672 "num_write_ops": 0, 00:10:29.672 "bytes_unmapped": 0, 00:10:29.672 "num_unmap_ops": 0, 00:10:29.672 "bytes_copied": 0, 00:10:29.672 "num_copy_ops": 0, 00:10:29.672 "read_latency_ticks": 1742285155970, 00:10:29.672 "max_read_latency_ticks": 7060828, 00:10:29.672 "min_read_latency_ticks": 10940, 00:10:29.672 "write_latency_ticks": 0, 00:10:29.672 "max_write_latency_ticks": 0, 00:10:29.672 "min_write_latency_ticks": 0, 00:10:29.672 "unmap_latency_ticks": 0, 00:10:29.672 "max_unmap_latency_ticks": 0, 00:10:29.672 "min_unmap_latency_ticks": 0, 00:10:29.672 "copy_latency_ticks": 0, 00:10:29.672 "max_copy_latency_ticks": 0, 00:10:29.672 "min_copy_latency_ticks": 0, 00:10:29.672 "io_error": {} 00:10:29.672 } 00:10:29.672 ] 00:10:29.672 }' 00:10:29.672 19:48:57 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:10:29.672 19:48:57 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=793218 
00:10:29.672 19:48:57 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:10:29.672 19:48:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=813307392 00:10:29.672 19:48:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=23574 00:10:29.672 19:48:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=24140390 00:10:29.672 19:48:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@124 -- # verify_qos_limits 24140390 24117248 00:10:29.672 19:48:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@37 -- # local result=24140390 00:10:29.672 19:48:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@38 -- # local limit=24117248 00:10:29.672 19:48:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # bc 00:10:29.672 19:48:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # '[' 1 -eq 1 ']' 00:10:29.672 19:48:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # bc 00:10:29.672 19:48:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # '[' 1 -eq 1 ']' 00:10:29.672 19:48:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@127 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_mbytes_per_sec 0 00:10:29.672 19:48:58 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.672 19:48:58 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:29.672 19:48:58 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.672 19:48:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@128 -- # run_fio Malloc0 00:10:29.672 19:48:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:10:29.672 19:48:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:10:29.672 19:48:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:10:29.673 19:48:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:10:29.673 19:48:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:10:29.673 19:48:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 
00:10:29.673 19:48:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:10:29.673 19:48:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:10:29.673 19:48:58 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.673 19:48:58 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:29.673 19:48:58 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.673 19:48:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:10:29.673 "tick_rate": 2100000000, 00:10:29.673 "ticks": 1245110957358, 00:10:29.673 "bdevs": [ 00:10:29.673 { 00:10:29.673 "name": "Malloc0", 00:10:29.673 "bytes_read": 813307392, 00:10:29.673 "num_read_ops": 793218, 00:10:29.673 "bytes_written": 0, 00:10:29.673 "num_write_ops": 0, 00:10:29.673 "bytes_unmapped": 0, 00:10:29.673 "num_unmap_ops": 0, 00:10:29.673 "bytes_copied": 0, 00:10:29.673 "num_copy_ops": 0, 00:10:29.673 "read_latency_ticks": 1742285155970, 00:10:29.673 "max_read_latency_ticks": 7060828, 00:10:29.673 "min_read_latency_ticks": 10940, 00:10:29.673 "write_latency_ticks": 0, 00:10:29.673 "max_write_latency_ticks": 0, 00:10:29.673 "min_write_latency_ticks": 0, 00:10:29.673 "unmap_latency_ticks": 0, 00:10:29.673 "max_unmap_latency_ticks": 0, 00:10:29.673 "min_unmap_latency_ticks": 0, 00:10:29.673 "copy_latency_ticks": 0, 00:10:29.673 "max_copy_latency_ticks": 0, 00:10:29.673 "min_copy_latency_ticks": 0, 00:10:29.673 "io_error": {} 00:10:29.673 } 00:10:29.673 ] 00:10:29.673 }' 00:10:29.673 19:48:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:10:29.673 19:48:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=793218 00:10:29.673 19:48:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:10:29.673 19:48:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=813307392 00:10:29.673 19:48:58 iscsi_tgt.iscsi_tgt_qos -- 
qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:10:29.673 [global] 00:10:29.673 thread=1 00:10:29.673 invalidate=1 00:10:29.673 rw=randread 00:10:29.673 time_based=1 00:10:29.673 runtime=5 00:10:29.673 ioengine=libaio 00:10:29.673 direct=1 00:10:29.673 bs=1024 00:10:29.673 iodepth=128 00:10:29.673 norandommap=1 00:10:29.673 numjobs=1 00:10:29.673 00:10:29.673 [job0] 00:10:29.673 filename=/dev/sda 00:10:29.673 queue_depth set to 113 (sda) 00:10:29.932 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:10:29.932 fio-3.35 00:10:29.932 Starting 1 thread 00:10:35.250 00:10:35.250 job0: (groupid=0, jobs=1): err= 0: pid=69892: Wed Jul 24 19:49:03 2024 00:10:35.250 read: IOPS=43.9k, BW=42.8MiB/s (44.9MB/s)(214MiB/5003msec) 00:10:35.250 slat (nsec): min=1906, max=1951.3k, avg=21082.01, stdev=63497.62 00:10:35.250 clat (usec): min=1221, max=5256, avg=2895.28, stdev=539.83 00:10:35.250 lat (usec): min=1230, max=5264, avg=2916.37, stdev=540.19 00:10:35.250 clat percentiles (usec): 00:10:35.250 | 1.00th=[ 2245], 5.00th=[ 2343], 10.00th=[ 2442], 20.00th=[ 2540], 00:10:35.250 | 30.00th=[ 2638], 40.00th=[ 2704], 50.00th=[ 2737], 60.00th=[ 2769], 00:10:35.250 | 70.00th=[ 2900], 80.00th=[ 3195], 90.00th=[ 3458], 95.00th=[ 4293], 00:10:35.250 | 99.00th=[ 4817], 99.50th=[ 4817], 99.90th=[ 5014], 99.95th=[ 5080], 00:10:35.250 | 99.99th=[ 5211] 00:10:35.250 bw ( KiB/s): min=30112, max=51716, per=100.00%, avg=44236.67, stdev=6554.27, samples=9 00:10:35.250 iops : min=30112, max=51716, avg=44236.67, stdev=6554.27, samples=9 00:10:35.250 lat (msec) : 2=0.30%, 4=92.81%, 10=6.89% 00:10:35.250 cpu : usr=7.26%, sys=18.27%, ctx=149021, majf=0, minf=32 00:10:35.250 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:10:35.250 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.250 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:35.250 issued rwts: total=219497,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.250 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:35.250 00:10:35.250 Run status group 0 (all jobs): 00:10:35.250 READ: bw=42.8MiB/s (44.9MB/s), 42.8MiB/s-42.8MiB/s (44.9MB/s-44.9MB/s), io=214MiB (225MB), run=5003-5003msec 00:10:35.250 00:10:35.250 Disk stats (read/write): 00:10:35.250 sda: ios=214856/0, merge=0/0, ticks=531338/0, in_queue=531338, util=98.15% 00:10:35.250 19:49:03 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:10:35.250 19:49:03 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.250 19:49:03 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:35.250 19:49:03 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.250 19:49:03 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:10:35.250 "tick_rate": 2100000000, 00:10:35.250 "ticks": 1256562294406, 00:10:35.250 "bdevs": [ 00:10:35.250 { 00:10:35.250 "name": "Malloc0", 00:10:35.250 "bytes_read": 1038072320, 00:10:35.250 "num_read_ops": 1012715, 00:10:35.250 "bytes_written": 0, 00:10:35.250 "num_write_ops": 0, 00:10:35.250 "bytes_unmapped": 0, 00:10:35.250 "num_unmap_ops": 0, 00:10:35.250 "bytes_copied": 0, 00:10:35.250 "num_copy_ops": 0, 00:10:35.250 "read_latency_ticks": 1796267183450, 00:10:35.250 "max_read_latency_ticks": 7060828, 00:10:35.250 "min_read_latency_ticks": 10940, 00:10:35.250 "write_latency_ticks": 0, 00:10:35.250 "max_write_latency_ticks": 0, 00:10:35.250 "min_write_latency_ticks": 0, 00:10:35.250 "unmap_latency_ticks": 0, 00:10:35.250 "max_unmap_latency_ticks": 0, 00:10:35.250 "min_unmap_latency_ticks": 0, 00:10:35.250 "copy_latency_ticks": 0, 00:10:35.250 "max_copy_latency_ticks": 0, 00:10:35.250 "min_copy_latency_ticks": 0, 00:10:35.250 "io_error": {} 00:10:35.250 } 00:10:35.250 ] 00:10:35.250 }' 00:10:35.250 
19:49:03 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:10:35.250 19:49:03 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=1012715 00:10:35.250 19:49:03 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:10:35.250 19:49:03 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=1038072320 00:10:35.250 19:49:03 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=43899 00:10:35.250 19:49:03 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=44952985 00:10:35.250 19:49:03 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@129 -- # '[' 44952985 -gt 24117248 ']' 00:10:35.250 19:49:03 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@132 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_mbytes_per_sec 23 --r_mbytes_per_sec 11 00:10:35.250 19:49:03 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.250 19:49:03 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:35.250 19:49:03 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.250 19:49:03 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@133 -- # run_fio Malloc0 00:10:35.250 19:49:03 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:10:35.250 19:49:03 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:10:35.250 19:49:03 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:10:35.250 19:49:03 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:10:35.250 19:49:03 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:10:35.250 19:49:03 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:10:35.250 19:49:03 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:10:35.250 19:49:03 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:10:35.250 19:49:03 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.250 
19:49:03 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:35.250 19:49:03 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.250 19:49:03 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:10:35.250 "tick_rate": 2100000000, 00:10:35.250 "ticks": 1256789334688, 00:10:35.250 "bdevs": [ 00:10:35.250 { 00:10:35.250 "name": "Malloc0", 00:10:35.250 "bytes_read": 1038072320, 00:10:35.250 "num_read_ops": 1012715, 00:10:35.250 "bytes_written": 0, 00:10:35.250 "num_write_ops": 0, 00:10:35.250 "bytes_unmapped": 0, 00:10:35.250 "num_unmap_ops": 0, 00:10:35.250 "bytes_copied": 0, 00:10:35.250 "num_copy_ops": 0, 00:10:35.250 "read_latency_ticks": 1796267183450, 00:10:35.250 "max_read_latency_ticks": 7060828, 00:10:35.250 "min_read_latency_ticks": 10940, 00:10:35.250 "write_latency_ticks": 0, 00:10:35.250 "max_write_latency_ticks": 0, 00:10:35.250 "min_write_latency_ticks": 0, 00:10:35.250 "unmap_latency_ticks": 0, 00:10:35.250 "max_unmap_latency_ticks": 0, 00:10:35.250 "min_unmap_latency_ticks": 0, 00:10:35.250 "copy_latency_ticks": 0, 00:10:35.250 "max_copy_latency_ticks": 0, 00:10:35.250 "min_copy_latency_ticks": 0, 00:10:35.250 "io_error": {} 00:10:35.250 } 00:10:35.250 ] 00:10:35.250 }' 00:10:35.250 19:49:03 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:10:35.250 19:49:03 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=1012715 00:10:35.250 19:49:03 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:10:35.250 19:49:03 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=1038072320 00:10:35.250 19:49:03 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:10:35.250 [global] 00:10:35.250 thread=1 00:10:35.250 invalidate=1 00:10:35.250 rw=randread 00:10:35.250 time_based=1 00:10:35.250 runtime=5 00:10:35.250 ioengine=libaio 00:10:35.250 direct=1 
00:10:35.250 bs=1024 00:10:35.250 iodepth=128 00:10:35.250 norandommap=1 00:10:35.250 numjobs=1 00:10:35.250 00:10:35.250 [job0] 00:10:35.250 filename=/dev/sda 00:10:35.250 queue_depth set to 113 (sda) 00:10:35.250 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:10:35.250 fio-3.35 00:10:35.250 Starting 1 thread 00:10:40.592 00:10:40.592 job0: (groupid=0, jobs=1): err= 0: pid=69977: Wed Jul 24 19:49:09 2024 00:10:40.592 read: IOPS=11.3k, BW=11.0MiB/s (11.5MB/s)(55.1MiB/5011msec) 00:10:40.592 slat (usec): min=2, max=1787, avg=85.24, stdev=245.02 00:10:40.592 clat (usec): min=3817, max=21825, avg=11278.81, stdev=548.61 00:10:40.592 lat (usec): min=3840, max=21831, avg=11364.05, stdev=571.02 00:10:40.592 clat percentiles (usec): 00:10:40.592 | 1.00th=[10421], 5.00th=[10683], 10.00th=[10945], 20.00th=[11076], 00:10:40.592 | 30.00th=[11076], 40.00th=[11076], 50.00th=[11076], 60.00th=[11207], 00:10:40.592 | 70.00th=[11207], 80.00th=[11863], 90.00th=[11994], 95.00th=[11994], 00:10:40.592 | 99.00th=[12125], 99.50th=[12256], 99.90th=[16712], 99.95th=[18744], 00:10:40.593 | 99.99th=[20841] 00:10:40.593 bw ( KiB/s): min=11108, max=11288, per=99.99%, avg=11258.80, stdev=57.42, samples=10 00:10:40.593 iops : min=11108, max=11288, avg=11259.00, stdev=57.49, samples=10 00:10:40.593 lat (msec) : 4=0.02%, 10=0.23%, 20=99.73%, 50=0.02% 00:10:40.593 cpu : usr=4.15%, sys=8.50%, ctx=32543, majf=0, minf=32 00:10:40.593 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:40.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.593 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:40.593 issued rwts: total=56422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.593 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:40.593 00:10:40.593 Run status group 0 (all jobs): 00:10:40.593 READ: bw=11.0MiB/s (11.5MB/s), 11.0MiB/s-11.0MiB/s 
(11.5MB/s-11.5MB/s), io=55.1MiB (57.8MB), run=5011-5011msec 00:10:40.593 00:10:40.593 Disk stats (read/write): 00:10:40.593 sda: ios=54990/0, merge=0/0, ticks=544216/0, in_queue=544216, util=98.15% 00:10:40.593 19:49:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:10:40.593 19:49:09 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.593 19:49:09 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:40.593 19:49:09 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.593 19:49:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:10:40.593 "tick_rate": 2100000000, 00:10:40.593 "ticks": 1268228065242, 00:10:40.593 "bdevs": [ 00:10:40.593 { 00:10:40.593 "name": "Malloc0", 00:10:40.593 "bytes_read": 1095848448, 00:10:40.593 "num_read_ops": 1069137, 00:10:40.593 "bytes_written": 0, 00:10:40.593 "num_write_ops": 0, 00:10:40.593 "bytes_unmapped": 0, 00:10:40.593 "num_unmap_ops": 0, 00:10:40.593 "bytes_copied": 0, 00:10:40.593 "num_copy_ops": 0, 00:10:40.593 "read_latency_ticks": 2412935598396, 00:10:40.593 "max_read_latency_ticks": 12329316, 00:10:40.593 "min_read_latency_ticks": 10940, 00:10:40.593 "write_latency_ticks": 0, 00:10:40.593 "max_write_latency_ticks": 0, 00:10:40.593 "min_write_latency_ticks": 0, 00:10:40.593 "unmap_latency_ticks": 0, 00:10:40.593 "max_unmap_latency_ticks": 0, 00:10:40.593 "min_unmap_latency_ticks": 0, 00:10:40.593 "copy_latency_ticks": 0, 00:10:40.593 "max_copy_latency_ticks": 0, 00:10:40.593 "min_copy_latency_ticks": 0, 00:10:40.593 "io_error": {} 00:10:40.593 } 00:10:40.593 ] 00:10:40.593 }' 00:10:40.593 19:49:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:10:40.593 19:49:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=1069137 00:10:40.593 19:49:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:10:40.593 19:49:09 
iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=1095848448 00:10:40.593 19:49:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=11284 00:10:40.593 19:49:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=11555225 00:10:40.593 19:49:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@134 -- # verify_qos_limits 11555225 11534336 00:10:40.593 19:49:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@37 -- # local result=11555225 00:10:40.593 19:49:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@38 -- # local limit=11534336 00:10:40.593 19:49:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # bc 00:10:40.593 19:49:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # '[' 1 -eq 1 ']' 00:10:40.593 19:49:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # bc 00:10:40.593 19:49:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # '[' 1 -eq 1 ']' 00:10:40.593 I/O bandwidth limiting tests successful 00:10:40.593 19:49:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@136 -- # echo 'I/O bandwidth limiting tests successful' 00:10:40.593 19:49:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@138 -- # iscsicleanup 00:10:40.593 Cleaning up iSCSI connection 00:10:40.593 19:49:09 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection' 00:10:40.593 19:49:09 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout 00:10:40.593 Logging out of session [sid: 12, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:10:40.593 Logout of [sid: 12, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:10:40.593 19:49:09 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete 00:10:40.852 19:49:09 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@985 -- # rm -rf 00:10:40.852 19:49:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@139 -- # rpc_cmd iscsi_delete_target_node iqn.2016-06.io.spdk:Target1 00:10:40.852 19:49:09 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.852 19:49:09 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:40.852 19:49:09 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.852 19:49:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@141 -- # rm -f ./local-job0-0-verify.state 00:10:40.852 19:49:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:10:40.852 19:49:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@143 -- # killprocess 69353 00:10:40.852 19:49:09 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@950 -- # '[' -z 69353 ']' 00:10:40.852 19:49:09 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@954 -- # kill -0 69353 00:10:40.852 19:49:09 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@955 -- # uname 00:10:40.852 19:49:09 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:40.852 19:49:09 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69353 00:10:40.852 19:49:09 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:40.852 killing process with pid 69353 00:10:40.852 19:49:09 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:40.852 19:49:09 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69353' 00:10:40.852 19:49:09 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@969 -- # kill 69353 00:10:40.852 19:49:09 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@974 -- # wait 69353 00:10:41.420 
19:49:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@145 -- # iscsitestfini 00:10:41.420 19:49:09 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:10:41.420 00:10:41.420 real 0m42.049s 00:10:41.420 user 0m37.729s 00:10:41.420 sys 0m12.438s 00:10:41.420 19:49:09 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:41.420 19:49:09 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:10:41.420 ************************************ 00:10:41.420 END TEST iscsi_tgt_qos 00:10:41.420 ************************************ 00:10:41.420 19:49:09 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@39 -- # run_test iscsi_tgt_ip_migration /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ip_migration/ip_migration.sh 00:10:41.420 19:49:09 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:41.420 19:49:09 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:41.420 19:49:09 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:10:41.420 ************************************ 00:10:41.420 START TEST iscsi_tgt_ip_migration 00:10:41.420 ************************************ 00:10:41.420 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ip_migration/ip_migration.sh 00:10:41.679 * Looking for test storage... 
00:10:41.679 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ip_migration 00:10:41.679 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:10:41.679 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:10:41.679 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:10:41.679 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:10:41.679 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:10:41.679 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:10:41.679 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:10:41.679 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:10:41.679 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:10:41.679 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:10:41.679 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:10:41.679 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:10:41.679 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:10:41.679 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:10:41.679 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:10:41.679 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:10:41.679 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@26 
-- # INITIATOR_NAME=ANY 00:10:41.679 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:10:41.679 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:10:41.679 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:10:41.679 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@11 -- # iscsitestinit 00:10:41.679 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:10:41.679 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@13 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:10:41.679 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@14 -- # pids=() 00:10:41.679 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@16 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:10:41.679 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:10:41.679 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:10:41.679 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:10:41.679 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:10:41.679 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:10:41.679 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:10:41.679 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:10:41.679 19:49:10 
iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:41.679 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:41.679 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:41.679 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:41.679 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:41.679 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:41.679 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:10:41.679 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:41.679 #define SPDK_CONFIG_H 00:10:41.679 #define SPDK_CONFIG_APPS 1 00:10:41.679 #define SPDK_CONFIG_ARCH native 00:10:41.679 #undef SPDK_CONFIG_ASAN 00:10:41.679 #undef SPDK_CONFIG_AVAHI 00:10:41.679 #undef SPDK_CONFIG_CET 00:10:41.679 #define SPDK_CONFIG_COVERAGE 1 00:10:41.679 #define SPDK_CONFIG_CROSS_PREFIX 00:10:41.679 #undef SPDK_CONFIG_CRYPTO 00:10:41.679 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:41.679 #undef SPDK_CONFIG_CUSTOMOCF 00:10:41.679 #undef SPDK_CONFIG_DAOS 00:10:41.679 #define SPDK_CONFIG_DAOS_DIR 00:10:41.679 #define SPDK_CONFIG_DEBUG 1 00:10:41.679 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:41.679 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:10:41.679 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:41.679 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:41.679 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:41.679 #undef SPDK_CONFIG_DPDK_UADK 00:10:41.679 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:10:41.679 #define SPDK_CONFIG_EXAMPLES 1 
00:10:41.679 #undef SPDK_CONFIG_FC 00:10:41.679 #define SPDK_CONFIG_FC_PATH 00:10:41.679 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:41.679 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:41.679 #undef SPDK_CONFIG_FUSE 00:10:41.679 #undef SPDK_CONFIG_FUZZER 00:10:41.679 #define SPDK_CONFIG_FUZZER_LIB 00:10:41.679 #undef SPDK_CONFIG_GOLANG 00:10:41.679 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:41.679 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:41.679 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:41.679 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:41.679 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:41.679 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:41.679 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:41.679 #define SPDK_CONFIG_IDXD 1 00:10:41.679 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:41.679 #undef SPDK_CONFIG_IPSEC_MB 00:10:41.679 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:41.679 #define SPDK_CONFIG_ISAL 1 00:10:41.679 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:41.679 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:41.679 #define SPDK_CONFIG_LIBDIR 00:10:41.679 #undef SPDK_CONFIG_LTO 00:10:41.679 #define SPDK_CONFIG_MAX_LCORES 128 00:10:41.679 #define SPDK_CONFIG_NVME_CUSE 1 00:10:41.679 #undef SPDK_CONFIG_OCF 00:10:41.679 #define SPDK_CONFIG_OCF_PATH 00:10:41.679 #define SPDK_CONFIG_OPENSSL_PATH 00:10:41.679 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:41.679 #define SPDK_CONFIG_PGO_DIR 00:10:41.679 #undef SPDK_CONFIG_PGO_USE 00:10:41.679 #define SPDK_CONFIG_PREFIX /usr/local 00:10:41.679 #undef SPDK_CONFIG_RAID5F 00:10:41.679 #undef SPDK_CONFIG_RBD 00:10:41.679 #define SPDK_CONFIG_RDMA 1 00:10:41.679 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:41.679 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:41.679 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:41.679 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:41.679 #define SPDK_CONFIG_SHARED 1 00:10:41.679 #undef SPDK_CONFIG_SMA 00:10:41.679 #define SPDK_CONFIG_TESTS 1 00:10:41.679 #undef SPDK_CONFIG_TSAN 00:10:41.679 #define SPDK_CONFIG_UBLK 1 
00:10:41.679 #define SPDK_CONFIG_UBSAN 1 00:10:41.679 #undef SPDK_CONFIG_UNIT_TESTS 00:10:41.679 #define SPDK_CONFIG_URING 1 00:10:41.679 #define SPDK_CONFIG_URING_PATH 00:10:41.679 #define SPDK_CONFIG_URING_ZNS 1 00:10:41.679 #undef SPDK_CONFIG_USDT 00:10:41.679 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:41.679 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:41.679 #undef SPDK_CONFIG_VFIO_USER 00:10:41.679 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:41.679 #define SPDK_CONFIG_VHOST 1 00:10:41.679 #define SPDK_CONFIG_VIRTIO 1 00:10:41.679 #undef SPDK_CONFIG_VTUNE 00:10:41.679 #define SPDK_CONFIG_VTUNE_DIR 00:10:41.679 #define SPDK_CONFIG_WERROR 1 00:10:41.680 #define SPDK_CONFIG_WPDK_DIR 00:10:41.680 #undef SPDK_CONFIG_XNVME 00:10:41.680 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:41.680 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:41.680 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@17 -- # NETMASK=127.0.0.0/24 00:10:41.680 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@18 -- # MIGRATION_ADDRESS=127.0.0.2 00:10:41.680 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@56 -- # echo 'Running ip migration tests' 00:10:41.680 Running ip migration tests 00:10:41.680 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@57 -- # timing_enter start_iscsi_tgt_0 00:10:41.680 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:41.680 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:10:41.680 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@58 -- # rpc_first_addr=/var/tmp/spdk0.sock 00:10:41.680 Process pid: 70113 00:10:41.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk0.sock... 
00:10:41.680 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@59 -- # iscsi_tgt_start /var/tmp/spdk0.sock 1 00:10:41.680 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@39 -- # pid=70113 00:10:41.680 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@40 -- # echo 'Process pid: 70113' 00:10:41.680 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@41 -- # pids+=($pid) 00:10:41.680 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@43 -- # trap 'kill_all_iscsi_target; exit 1' SIGINT SIGTERM EXIT 00:10:41.680 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@45 -- # waitforlisten 70113 /var/tmp/spdk0.sock 00:10:41.680 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -r /var/tmp/spdk0.sock -m 1 --wait-for-rpc 00:10:41.680 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@831 -- # '[' -z 70113 ']' 00:10:41.680 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk0.sock 00:10:41.680 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:41.680 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk0.sock...' 00:10:41.680 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:41.680 19:49:10 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:10:41.680 [2024-07-24 19:49:10.193520] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:10:41.680 [2024-07-24 19:49:10.193644] [ DPDK EAL parameters: iscsi --no-shconf -c 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70113 ] 00:10:41.680 [2024-07-24 19:49:10.331637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.938 [2024-07-24 19:49:10.512273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.502 19:49:11 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:42.502 19:49:11 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@864 -- # return 0 00:10:42.502 19:49:11 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@46 -- # rpc_cmd -s /var/tmp/spdk0.sock iscsi_set_options -o 30 -a 64 00:10:42.502 19:49:11 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.502 19:49:11 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:10:42.502 19:49:11 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.502 19:49:11 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@47 -- # rpc_cmd -s /var/tmp/spdk0.sock framework_start_init 00:10:42.503 19:49:11 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.503 19:49:11 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:10:42.760 [2024-07-24 19:49:11.244653] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:43.019 iscsi_tgt is listening. Running tests... 00:10:43.019 19:49:11 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.019 19:49:11 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@48 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:10:43.019 19:49:11 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@50 -- # rpc_cmd -s /var/tmp/spdk0.sock iscsi_create_initiator_group 2 ANY 127.0.0.0/24 00:10:43.019 19:49:11 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.019 19:49:11 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:10:43.019 19:49:11 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.019 19:49:11 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@51 -- # rpc_cmd -s /var/tmp/spdk0.sock bdev_malloc_create 64 512 00:10:43.019 19:49:11 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.019 19:49:11 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:10:43.019 Malloc0 00:10:43.019 19:49:11 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.019 19:49:11 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@53 -- # trap 'kill_all_iscsi_target; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:10:43.019 19:49:11 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@60 -- # timing_exit start_iscsi_tgt_0 00:10:43.019 19:49:11 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:43.019 19:49:11 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:10:43.019 19:49:11 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@62 -- # timing_enter start_iscsi_tgt_1 00:10:43.019 19:49:11 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:43.019 19:49:11 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:10:43.019 Process pid: 70148 00:10:43.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk1.sock... 
00:10:43.019 19:49:11 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@63 -- # rpc_second_addr=/var/tmp/spdk1.sock 00:10:43.019 19:49:11 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@64 -- # iscsi_tgt_start /var/tmp/spdk1.sock 2 00:10:43.019 19:49:11 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@39 -- # pid=70148 00:10:43.019 19:49:11 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@40 -- # echo 'Process pid: 70148' 00:10:43.019 19:49:11 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@41 -- # pids+=($pid) 00:10:43.019 19:49:11 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@43 -- # trap 'kill_all_iscsi_target; exit 1' SIGINT SIGTERM EXIT 00:10:43.019 19:49:11 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@45 -- # waitforlisten 70148 /var/tmp/spdk1.sock 00:10:43.019 19:49:11 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -r /var/tmp/spdk1.sock -m 2 --wait-for-rpc 00:10:43.019 19:49:11 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@831 -- # '[' -z 70148 ']' 00:10:43.019 19:49:11 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk1.sock 00:10:43.019 19:49:11 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:43.019 19:49:11 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk1.sock...' 00:10:43.019 19:49:11 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:43.019 19:49:11 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:10:43.019 [2024-07-24 19:49:11.668271] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:10:43.019 [2024-07-24 19:49:11.668710] [ DPDK EAL parameters: iscsi --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70148 ] 00:10:43.278 [2024-07-24 19:49:11.806215] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.536 [2024-07-24 19:49:11.978422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:44.103 19:49:12 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:44.103 19:49:12 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@864 -- # return 0 00:10:44.103 19:49:12 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@46 -- # rpc_cmd -s /var/tmp/spdk1.sock iscsi_set_options -o 30 -a 64 00:10:44.103 19:49:12 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.103 19:49:12 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:10:44.103 19:49:12 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.103 19:49:12 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@47 -- # rpc_cmd -s /var/tmp/spdk1.sock framework_start_init 00:10:44.103 19:49:12 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.103 19:49:12 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:10:44.103 [2024-07-24 19:49:12.657229] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:44.361 iscsi_tgt is listening. Running tests... 00:10:44.361 19:49:12 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.361 19:49:12 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@48 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:10:44.361 19:49:12 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@50 -- # rpc_cmd -s /var/tmp/spdk1.sock iscsi_create_initiator_group 2 ANY 127.0.0.0/24 00:10:44.361 19:49:12 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.361 19:49:12 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:10:44.361 19:49:12 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.361 19:49:12 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@51 -- # rpc_cmd -s /var/tmp/spdk1.sock bdev_malloc_create 64 512 00:10:44.361 19:49:12 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.361 19:49:12 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:10:44.361 Malloc0 00:10:44.361 19:49:12 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.361 19:49:12 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@53 -- # trap 'kill_all_iscsi_target; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:10:44.361 19:49:12 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@65 -- # timing_exit start_iscsi_tgt_1 00:10:44.361 19:49:12 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:44.361 19:49:12 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:10:44.361 19:49:13 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@67 -- # rpc_add_target_node /var/tmp/spdk0.sock 00:10:44.361 19:49:13 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@28 -- # ip netns exec spdk_iscsi_ns ip addr add 127.0.0.2/24 dev spdk_tgt_int 00:10:44.361 19:49:13 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@29 -- # rpc_cmd -s /var/tmp/spdk0.sock iscsi_create_portal_group 1 127.0.0.2:3260 00:10:44.361 19:49:13 iscsi_tgt.iscsi_tgt_ip_migration -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.361 19:49:13 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:10:44.361 19:49:13 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.619 19:49:13 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@30 -- # rpc_cmd -s /var/tmp/spdk0.sock iscsi_create_target_node target1 target1_alias Malloc0:0 1:2 64 -d 00:10:44.619 19:49:13 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.619 19:49:13 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:10:44.619 19:49:13 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.619 19:49:13 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@31 -- # ip netns exec spdk_iscsi_ns ip addr del 127.0.0.2/24 dev spdk_tgt_int 00:10:44.619 19:49:13 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@69 -- # sleep 1 00:10:45.585 19:49:14 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@70 -- # iscsiadm -m discovery -t sendtargets -p 127.0.0.2:3260 00:10:45.585 127.0.0.2:3260,1 iqn.2016-06.io.spdk:target1 00:10:45.585 19:49:14 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@71 -- # sleep 1 00:10:46.520 19:49:15 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@72 -- # iscsiadm -m node --login -p 127.0.0.2:3260 00:10:46.520 Logging in to [iface: default, target: iqn.2016-06.io.spdk:target1, portal: 127.0.0.2,3260] 00:10:46.520 Login to [iface: default, target: iqn.2016-06.io.spdk:target1, portal: 127.0.0.2,3260] successful. 
00:10:46.520 19:49:15 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@73 -- # waitforiscsidevices 1 00:10:46.520 19:49:15 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@116 -- # local num=1 00:10:46.520 19:49:15 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:10:46.520 19:49:15 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:10:46.520 19:49:15 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:10:46.520 19:49:15 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:10:46.520 [2024-07-24 19:49:15.118188] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:46.520 19:49:15 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@119 -- # n=1 00:10:46.520 19:49:15 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:10:46.521 19:49:15 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@123 -- # return 0 00:10:46.521 19:49:15 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 4096 -d 32 -t randrw -r 12 00:10:46.521 19:49:15 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@77 -- # fiopid=70225 00:10:46.521 19:49:15 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@78 -- # sleep 3 00:10:46.521 [global] 00:10:46.521 thread=1 00:10:46.521 invalidate=1 00:10:46.521 rw=randrw 00:10:46.521 time_based=1 00:10:46.521 runtime=12 00:10:46.521 ioengine=libaio 00:10:46.521 direct=1 00:10:46.521 bs=4096 00:10:46.521 iodepth=32 00:10:46.521 norandommap=1 00:10:46.521 numjobs=1 00:10:46.521 00:10:46.521 [job0] 00:10:46.521 filename=/dev/sda 00:10:46.521 queue_depth set to 113 (sda) 00:10:46.780 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32 00:10:46.780 fio-3.35 
00:10:46.780 Starting 1 thread 00:10:46.780 [2024-07-24 19:49:15.326306] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:50.064 19:49:18 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@80 -- # rpc_cmd -s /var/tmp/spdk0.sock spdk_kill_instance SIGTERM 00:10:50.064 19:49:18 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.064 19:49:18 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:10:50.064 19:49:18 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.064 19:49:18 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@81 -- # wait 70113 00:10:50.325 19:49:18 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@83 -- # rpc_add_target_node /var/tmp/spdk1.sock 00:10:50.325 19:49:18 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@28 -- # ip netns exec spdk_iscsi_ns ip addr add 127.0.0.2/24 dev spdk_tgt_int 00:10:50.325 19:49:18 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@29 -- # rpc_cmd -s /var/tmp/spdk1.sock iscsi_create_portal_group 1 127.0.0.2:3260 00:10:50.325 19:49:18 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.325 19:49:18 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:10:50.325 19:49:18 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.325 19:49:18 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@30 -- # rpc_cmd -s /var/tmp/spdk1.sock iscsi_create_target_node target1 target1_alias Malloc0:0 1:2 64 -d 00:10:50.325 19:49:18 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.325 19:49:18 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:10:50.325 19:49:18 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:10:50.325 19:49:18 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@31 -- # ip netns exec spdk_iscsi_ns ip addr del 127.0.0.2/24 dev spdk_tgt_int 00:10:50.325 19:49:18 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@85 -- # wait 70225 00:11:00.309 [2024-07-24 19:49:27.439709] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:00.309 00:11:00.309 job0: (groupid=0, jobs=1): err= 0: pid=70252: Wed Jul 24 19:49:27 2024 00:11:00.309 read: IOPS=16.8k, BW=65.7MiB/s (68.9MB/s)(789MiB/12001msec) 00:11:00.309 slat (usec): min=2, max=301, avg= 5.18, stdev= 3.77 00:11:00.309 clat (usec): min=221, max=2007.4k, avg=969.61, stdev=18413.47 00:11:00.309 lat (usec): min=288, max=2007.4k, avg=974.79, stdev=18413.53 00:11:00.309 clat percentiles (usec): 00:11:00.309 | 1.00th=[ 506], 5.00th=[ 586], 10.00th=[ 660], 20.00th=[ 709], 00:11:00.309 | 30.00th=[ 742], 40.00th=[ 766], 50.00th=[ 791], 60.00th=[ 816], 00:11:00.309 | 70.00th=[ 848], 80.00th=[ 889], 90.00th=[ 971], 95.00th=[ 1037], 00:11:00.309 | 99.00th=[ 1139], 99.50th=[ 1172], 99.90th=[ 1418], 99.95th=[ 2114], 00:11:00.309 | 99.99th=[ 4817] 00:11:00.309 bw ( KiB/s): min=32008, max=83728, per=100.00%, avg=76934.80, stdev=13425.09, samples=20 00:11:00.309 iops : min= 8002, max=20932, avg=19233.70, stdev=3356.27, samples=20 00:11:00.309 write: IOPS=16.8k, BW=65.7MiB/s (68.9MB/s)(788MiB/12001msec); 0 zone resets 00:11:00.309 slat (usec): min=2, max=129, avg= 5.21, stdev= 3.75 00:11:00.309 clat (usec): min=259, max=2007.3k, avg=921.63, stdev=17298.59 00:11:00.309 lat (usec): min=278, max=2007.3k, avg=926.84, stdev=17298.65 00:11:00.309 clat percentiles (usec): 00:11:00.309 | 1.00th=[ 482], 5.00th=[ 586], 10.00th=[ 635], 20.00th=[ 676], 00:11:00.309 | 30.00th=[ 701], 40.00th=[ 725], 50.00th=[ 750], 60.00th=[ 783], 00:11:00.309 | 70.00th=[ 824], 80.00th=[ 873], 90.00th=[ 955], 95.00th=[ 1012], 00:11:00.309 | 99.00th=[ 1106], 99.50th=[ 1139], 99.90th=[ 
1369], 99.95th=[ 2073], 00:11:00.309 | 99.99th=[ 4686] 00:11:00.309 bw ( KiB/s): min=32056, max=83904, per=100.00%, avg=76821.55, stdev=13241.43, samples=20 00:11:00.309 iops : min= 8014, max=20976, avg=19205.35, stdev=3310.35, samples=20 00:11:00.309 lat (usec) : 250=0.01%, 500=1.20%, 750=40.69%, 1000=51.20% 00:11:00.309 lat (msec) : 2=6.85%, 4=0.03%, 10=0.02%, >=2000=0.01% 00:11:00.309 cpu : usr=8.12%, sys=16.11%, ctx=30116, majf=0, minf=1 00:11:00.309 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0% 00:11:00.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:11:00.309 issued rwts: total=201857,201786,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.309 latency : target=0, window=0, percentile=100.00%, depth=32 00:11:00.309 00:11:00.309 Run status group 0 (all jobs): 00:11:00.309 READ: bw=65.7MiB/s (68.9MB/s), 65.7MiB/s-65.7MiB/s (68.9MB/s-68.9MB/s), io=789MiB (827MB), run=12001-12001msec 00:11:00.309 WRITE: bw=65.7MiB/s (68.9MB/s), 65.7MiB/s-65.7MiB/s (68.9MB/s-68.9MB/s), io=788MiB (827MB), run=12001-12001msec 00:11:00.309 00:11:00.309 Disk stats (read/write): 00:11:00.309 sda: ios=199510/199347, merge=0/0, ticks=178380/174626, in_queue=353007, util=99.35% 00:11:00.309 19:49:27 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@87 -- # trap - SIGINT SIGTERM EXIT 00:11:00.309 Cleaning up iSCSI connection 00:11:00.309 19:49:27 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@89 -- # iscsicleanup 00:11:00.309 19:49:27 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection' 00:11:00.309 19:49:27 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout 00:11:00.309 Logging out of session [sid: 13, target: iqn.2016-06.io.spdk:target1, portal: 127.0.0.2,3260] 00:11:00.309 Logout of [sid: 13, target: 
iqn.2016-06.io.spdk:target1, portal: 127.0.0.2,3260] successful. 00:11:00.309 19:49:27 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete 00:11:00.309 19:49:27 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@985 -- # rm -rf 00:11:00.309 19:49:27 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@91 -- # rpc_cmd -s /var/tmp/spdk1.sock spdk_kill_instance SIGTERM 00:11:00.309 19:49:27 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.309 19:49:27 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:11:00.309 19:49:27 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.309 19:49:27 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@92 -- # wait 70148 00:11:00.309 19:49:28 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@93 -- # iscsitestfini 00:11:00.309 19:49:28 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:11:00.309 00:11:00.309 real 0m18.167s 00:11:00.309 user 0m23.487s 00:11:00.309 sys 0m4.904s 00:11:00.309 ************************************ 00:11:00.309 END TEST iscsi_tgt_ip_migration 00:11:00.309 ************************************ 00:11:00.309 19:49:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:00.309 19:49:28 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:11:00.309 19:49:28 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@40 -- # run_test iscsi_tgt_trace_record /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/trace_record/trace_record.sh 00:11:00.309 19:49:28 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:00.309 19:49:28 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:00.309 19:49:28 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:11:00.310 ************************************ 
00:11:00.310 START TEST iscsi_tgt_trace_record 00:11:00.310 ************************************ 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/trace_record/trace_record.sh 00:11:00.310 * Looking for test storage... 00:11:00.310 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/trace_record 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@23 -- 
# ISCSI_PORT=3260 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@11 -- # iscsitestinit 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@13 -- # TRACE_TMP_FOLDER=./tmp-trace 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@14 -- # TRACE_RECORD_OUTPUT=./tmp-trace/record.trace 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@15 -- # TRACE_RECORD_NOTICE_LOG=./tmp-trace/record.notice 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@16 -- # TRACE_TOOL_LOG=./tmp-trace/trace.log 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@22 -- # '[' -z 10.0.0.1 ']' 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@27 -- # '[' -z 10.0.0.2 ']' 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@32 -- # NUM_TRACE_ENTRIES=4096 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@33 -- # MALLOC_BDEV_SIZE=64 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@34 -- # 
MALLOC_BLOCK_SIZE=4096 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@36 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@37 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@39 -- # timing_enter start_iscsi_tgt 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@10 -- # set +x 00:11:00.310 start iscsi_tgt with trace enabled 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@41 -- # echo 'start iscsi_tgt with trace enabled' 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@43 -- # iscsi_pid=70457 00:11:00.310 Process pid: 70457 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@44 -- # echo 'Process pid: 70457' 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@42 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xf --num-trace-entries 4096 --tpoint-group all 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@46 -- # trap 'killprocess $iscsi_pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@48 -- # waitforlisten 70457 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@831 -- # '[' -z 70457 ']' 00:11:00.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
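The `waitforlisten 70457` call above blocks until the target process creates its RPC socket at `/var/tmp/spdk.sock`, polling with a bounded retry count (`max_retries=100`). A minimal sketch of that polling shape, not SPDK's actual helper — the path, delay, and retry count here are illustrative stand-ins:

```shell
# Hypothetical stand-in socket path; a background job simulates the target
# creating it a moment after startup.
rpc_addr="/tmp/fake_spdk.sock"
max_retries=100
rm -f "$rpc_addr"
( sleep 0.2; touch "$rpc_addr" ) &

# Poll for the socket with a bounded number of retries, as waitforlisten does.
i=0
while [ ! -e "$rpc_addr" ]; do
    i=$((i + 1))
    if [ "$i" -gt "$max_retries" ]; then
        echo "timed out waiting for $rpc_addr" >&2
        exit 1
    fi
    sleep 0.1
done
echo "listening: $rpc_addr"
rm -f "$rpc_addr"
```

The real helper additionally verifies the PID is still alive on each iteration, so a crashed target fails fast instead of burning the full retry budget.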
00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:00.310 19:49:28 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@10 -- # set +x 00:11:00.310 [2024-07-24 19:49:28.438385] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:11:00.310 [2024-07-24 19:49:28.438517] [ DPDK EAL parameters: iscsi --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70457 ] 00:11:00.310 [2024-07-24 19:49:28.591238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:00.310 [2024-07-24 19:49:28.762338] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask all specified. 00:11:00.310 [2024-07-24 19:49:28.762411] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s iscsi -p 70457' to capture a snapshot of events at runtime. 00:11:00.310 [2024-07-24 19:49:28.762427] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:00.310 [2024-07-24 19:49:28.762440] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:00.310 [2024-07-24 19:49:28.762451] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/iscsi_trace.pid70457 for offline analysis/debug. 
00:11:00.310 [2024-07-24 19:49:28.762617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:00.310 [2024-07-24 19:49:28.763529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:00.310 [2024-07-24 19:49:28.763610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:00.310 [2024-07-24 19:49:28.763614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.310 [2024-07-24 19:49:28.848097] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:00.877 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:00.877 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@864 -- # return 0 00:11:00.877 iscsi_tgt is listening. Running tests... 00:11:00.877 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@50 -- # echo 'iscsi_tgt is listening. Running tests...' 00:11:00.877 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@52 -- # timing_exit start_iscsi_tgt 00:11:00.877 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:00.877 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@10 -- # set +x 00:11:00.877 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@54 -- # mkdir -p ./tmp-trace 00:11:00.877 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@56 -- # record_pid=70492 00:11:00.877 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@57 -- # echo 'Trace record pid: 70492' 00:11:00.877 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace_record -s iscsi -p 70457 -f ./tmp-trace/record.trace -q 00:11:00.877 Trace record pid: 70492 00:11:00.877 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@59 -- # RPCS= 00:11:00.878 
19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@60 -- # RPCS+='iscsi_create_portal_group 1 10.0.0.1:3260\n' 00:11:00.878 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@61 -- # RPCS+='iscsi_create_initiator_group 2 ANY 10.0.0.2/32\n' 00:11:00.878 Create bdevs and target nodes 00:11:00.878 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@63 -- # echo 'Create bdevs and target nodes' 00:11:00.878 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@64 -- # CONNECTION_NUMBER=15 00:11:00.878 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # seq 0 15 00:11:00.878 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:00.878 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc0\n' 00:11:00.878 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target0 Target0_alias Malloc0:0 1:2 256 -d\n' 00:11:00.878 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:00.878 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc1\n' 00:11:00.878 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target1 Target1_alias Malloc1:0 1:2 256 -d\n' 00:11:00.878 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:00.878 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc2\n' 00:11:00.878 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target2 
Target2_alias Malloc2:0 1:2 256 -d\n' 00:11:00.878 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:00.878 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc3\n' 00:11:00.878 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target3 Target3_alias Malloc3:0 1:2 256 -d\n' 00:11:00.878 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:00.878 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc4\n' 00:11:00.878 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target4 Target4_alias Malloc4:0 1:2 256 -d\n' 00:11:00.878 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:00.878 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc5\n' 00:11:00.878 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target5 Target5_alias Malloc5:0 1:2 256 -d\n' 00:11:01.137 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:01.137 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc6\n' 00:11:01.137 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target6 Target6_alias Malloc6:0 1:2 256 -d\n' 00:11:01.137 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:01.137 19:49:29 
iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc7\n' 00:11:01.137 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target7 Target7_alias Malloc7:0 1:2 256 -d\n' 00:11:01.137 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:01.137 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc8\n' 00:11:01.137 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target8 Target8_alias Malloc8:0 1:2 256 -d\n' 00:11:01.137 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:01.137 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc9\n' 00:11:01.137 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target9 Target9_alias Malloc9:0 1:2 256 -d\n' 00:11:01.137 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:01.137 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc10\n' 00:11:01.137 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target10 Target10_alias Malloc10:0 1:2 256 -d\n' 00:11:01.137 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:01.137 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc11\n' 00:11:01.137 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- 
trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target11 Target11_alias Malloc11:0 1:2 256 -d\n' 00:11:01.137 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:01.137 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc12\n' 00:11:01.137 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target12 Target12_alias Malloc12:0 1:2 256 -d\n' 00:11:01.137 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:01.137 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc13\n' 00:11:01.137 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target13 Target13_alias Malloc13:0 1:2 256 -d\n' 00:11:01.137 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:01.137 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc14\n' 00:11:01.137 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target14 Target14_alias Malloc14:0 1:2 256 -d\n' 00:11:01.137 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:01.137 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc15\n' 00:11:01.137 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target15 Target15_alias Malloc15:0 1:2 256 -d\n' 00:11:01.137 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- 
trace_record/trace_record.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:01.137 19:49:29 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@69 -- # echo -e iscsi_create_portal_group 1 '10.0.0.1:3260\niscsi_create_initiator_group' 2 ANY '10.0.0.2/32\nbdev_malloc_create' 64 4096 -b 'Malloc0\niscsi_create_target_node' Target0 Target0_alias Malloc0:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc1\niscsi_create_target_node' Target1 Target1_alias Malloc1:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc2\niscsi_create_target_node' Target2 Target2_alias Malloc2:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc3\niscsi_create_target_node' Target3 Target3_alias Malloc3:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc4\niscsi_create_target_node' Target4 Target4_alias Malloc4:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc5\niscsi_create_target_node' Target5 Target5_alias Malloc5:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc6\niscsi_create_target_node' Target6 Target6_alias Malloc6:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc7\niscsi_create_target_node' Target7 Target7_alias Malloc7:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc8\niscsi_create_target_node' Target8 Target8_alias Malloc8:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc9\niscsi_create_target_node' Target9 Target9_alias Malloc9:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc10\niscsi_create_target_node' Target10 Target10_alias Malloc10:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc11\niscsi_create_target_node' Target11 Target11_alias Malloc11:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc12\niscsi_create_target_node' Target12 Target12_alias Malloc12:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc13\niscsi_create_target_node' Target13 Target13_alias Malloc13:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc14\niscsi_create_target_node' Target14 Target14_alias Malloc14:0 1:2 256 '-d\nbdev_malloc_create' 
64 4096 -b 'Malloc15\niscsi_create_target_node' Target15 Target15_alias Malloc15:0 1:2 256 '-d\n' 00:11:02.071 Malloc0 00:11:02.071 Malloc1 00:11:02.071 Malloc2 00:11:02.071 Malloc3 00:11:02.071 Malloc4 00:11:02.071 Malloc5 00:11:02.071 Malloc6 00:11:02.071 Malloc7 00:11:02.071 Malloc8 00:11:02.071 Malloc9 00:11:02.071 Malloc10 00:11:02.071 Malloc11 00:11:02.071 Malloc12 00:11:02.071 Malloc13 00:11:02.071 Malloc14 00:11:02.071 Malloc15 00:11:02.071 19:49:30 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@71 -- # sleep 1 00:11:03.002 19:49:31 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@73 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:11:03.002 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target0 00:11:03.002 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:11:03.002 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target2 00:11:03.002 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:11:03.002 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target4 00:11:03.002 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target5 00:11:03.002 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target6 00:11:03.002 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target7 00:11:03.002 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target8 00:11:03.002 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target9 00:11:03.002 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target10 00:11:03.002 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target11 00:11:03.002 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target12 00:11:03.002 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target13 00:11:03.002 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target14 00:11:03.002 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target15 00:11:03.002 19:49:31 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@74 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:11:03.002 [2024-07-24 19:49:31.528297] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:03.002 [2024-07-24 19:49:31.541997] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 
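The `RPCS+=` lines above show how the harness batches its configuration: each RPC command is appended to a single string with a literal `\n` separator, then the whole batch is expanded with `echo -e` and piped to `scripts/rpc.py` in one invocation, avoiding one Python startup per command. A sketch of the same pattern that only counts the resulting commands rather than invoking rpc.py:

```shell
# Build the batch exactly as the trace_record test does: one portal group,
# one initiator group, then a malloc bdev + target node per connection.
RPCS=
RPCS+='iscsi_create_portal_group 1 10.0.0.1:3260\n'
RPCS+='iscsi_create_initiator_group 2 ANY 10.0.0.2/32\n'
for i in $(seq 0 15); do
    RPCS+="bdev_malloc_create 64 4096 -b Malloc$i\n"
    RPCS+="iscsi_create_target_node Target$i Target${i}_alias Malloc$i:0 1:2 256 -d\n"
done

# echo -e turns the literal \n separators into real newlines; count the
# non-empty lines instead of piping them to rpc.py here.
num_rpcs=$(echo -e "$RPCS" | grep -c .)
echo "$num_rpcs"
```

With 2 group commands plus 2 commands for each of the 16 targets, the batch holds 34 RPCs; in the real test the final line is `echo -e "$RPCS" | "$rpc_py"`.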
00:11:03.002 [2024-07-24 19:49:31.609296] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:03.002 [2024-07-24 19:49:31.622777] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:03.002 [2024-07-24 19:49:31.638925] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:03.002 [2024-07-24 19:49:31.655680] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:03.260 [2024-07-24 19:49:31.699096] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:03.260 [2024-07-24 19:49:31.721111] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:03.260 [2024-07-24 19:49:31.759798] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:03.260 [2024-07-24 19:49:31.790284] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:03.260 [2024-07-24 19:49:31.844172] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:03.260 [2024-07-24 19:49:31.856743] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:03.260 [2024-07-24 19:49:31.895415] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:03.260 [2024-07-24 19:49:31.907282] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:03.518 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target0, portal: 10.0.0.1,3260] 00:11:03.518 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:11:03.518 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:11:03.518 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:11:03.518 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:11:03.518 
Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:11:03.518 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:11:03.518 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:11:03.518 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:11:03.518 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:11:03.518 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:11:03.518 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] 00:11:03.518 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] 00:11:03.518 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] 00:11:03.518 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] 00:11:03.518 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] 00:11:03.518 Login to [iface: default, target: iqn.2016-06.io.spdk:Target0, portal: 10.0.0.1,3260] successful. 00:11:03.518 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:11:03.518 Login to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:11:03.518 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:11:03.518 Login to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:11:03.518 Login to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 00:11:03.518 Login to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 
00:11:03.518 Login to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:11:03.518 Login to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 00:11:03.518 Login to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:11:03.518 Login to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:11:03.518 Login to [iface: default, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] successful. 00:11:03.518 Login to [iface: default, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] successful. 00:11:03.518 Login to [iface: default, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] successful. 00:11:03.518 Login to [iface: default, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] successful. 00:11:03.518 Login to [iface: default, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] successful. 
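After the sixteen logins succeed, the harness confirms the kernel actually attached one SCSI disk per session by counting `Attached scsi disk sd*` lines in `iscsiadm -m session -P 3` output, retrying until the count matches. A sketch of that check using canned session output in place of a live `iscsiadm` call (the device names and expected count are illustrative):

```shell
# Canned stand-in for `iscsiadm -m session -P 3` output.
expected=3
session_output='Attached scsi disk sda  State: running
Attached scsi disk sdb  State: running
Attached scsi disk sdc  State: running'

# Count attached disks the same way waitforiscsidevices does.
n=$(printf '%s\n' "$session_output" | grep -c 'Attached scsi disk sd[a-z]*')
if [ "$n" -ne "$expected" ]; then
    echo "only $n of $expected devices attached" >&2
    exit 1
fi
echo "all $expected devices attached"
```

The real helper wraps this in a bounded retry loop (up to 20 iterations with a sleep between them), since the disks appear asynchronously after login.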
00:11:03.518 [2024-07-24 19:49:31.954466] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:03.518 19:49:31 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@75 -- # waitforiscsidevices 16 00:11:03.518 19:49:31 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@116 -- # local num=16 00:11:03.518 19:49:31 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:11:03.518 19:49:31 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:11:03.518 19:49:31 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:11:03.518 19:49:31 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:11:03.518 [2024-07-24 19:49:31.972907] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:03.518 Running FIO 00:11:03.518 19:49:31 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@119 -- # n=16 00:11:03.518 19:49:31 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@120 -- # '[' 16 -ne 16 ']' 00:11:03.518 19:49:31 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@123 -- # return 0 00:11:03.518 19:49:31 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@77 -- # trap 'iscsicleanup; killprocess $iscsi_pid; killprocess $record_pid; delete_tmp_files; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:11:03.518 19:49:31 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@79 -- # echo 'Running FIO' 00:11:03.518 19:49:31 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 32 -t randrw -r 1 00:11:03.518 [global] 00:11:03.518 thread=1 00:11:03.518 invalidate=1 00:11:03.518 rw=randrw 00:11:03.518 time_based=1 00:11:03.518 runtime=1 00:11:03.518 ioengine=libaio 00:11:03.518 direct=1 00:11:03.518 bs=131072 00:11:03.518 iodepth=32 00:11:03.518 
norandommap=1 00:11:03.518 numjobs=1 00:11:03.518 00:11:03.518 [job0] 00:11:03.518 filename=/dev/sda 00:11:03.518 [job1] 00:11:03.518 filename=/dev/sdb 00:11:03.518 [job2] 00:11:03.518 filename=/dev/sdc 00:11:03.518 [job3] 00:11:03.518 filename=/dev/sdd 00:11:03.518 [job4] 00:11:03.518 filename=/dev/sde 00:11:03.518 [job5] 00:11:03.518 filename=/dev/sdf 00:11:03.518 [job6] 00:11:03.518 filename=/dev/sdh 00:11:03.518 [job7] 00:11:03.518 filename=/dev/sdg 00:11:03.518 [job8] 00:11:03.518 filename=/dev/sdi 00:11:03.518 [job9] 00:11:03.518 filename=/dev/sdj 00:11:03.518 [job10] 00:11:03.518 filename=/dev/sdk 00:11:03.518 [job11] 00:11:03.518 filename=/dev/sdl 00:11:03.518 [job12] 00:11:03.518 filename=/dev/sdm 00:11:03.518 [job13] 00:11:03.518 filename=/dev/sdn 00:11:03.518 [job14] 00:11:03.518 filename=/dev/sdp 00:11:03.518 [job15] 00:11:03.518 filename=/dev/sdo 00:11:03.777 queue_depth set to 113 (sda) 00:11:03.777 queue_depth set to 113 (sdb) 00:11:03.777 queue_depth set to 113 (sdc) 00:11:03.777 queue_depth set to 113 (sdd) 00:11:03.777 queue_depth set to 113 (sde) 00:11:03.777 queue_depth set to 113 (sdf) 00:11:03.777 queue_depth set to 113 (sdh) 00:11:04.037 queue_depth set to 113 (sdg) 00:11:04.037 queue_depth set to 113 (sdi) 00:11:04.037 queue_depth set to 113 (sdj) 00:11:04.037 queue_depth set to 113 (sdk) 00:11:04.037 queue_depth set to 113 (sdl) 00:11:04.037 queue_depth set to 113 (sdm) 00:11:04.037 queue_depth set to 113 (sdn) 00:11:04.037 queue_depth set to 113 (sdp) 00:11:04.037 queue_depth set to 113 (sdo) 00:11:04.295 job0: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:11:04.295 job1: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:11:04.295 job2: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:11:04.295 job3: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, 
(T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:11:04.295 job4: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:11:04.295 job5: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:11:04.295 job6: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:11:04.295 job7: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:11:04.295 job8: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:11:04.295 job9: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:11:04.295 job10: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:11:04.295 job11: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:11:04.295 job12: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:11:04.295 job13: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:11:04.295 job14: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:11:04.295 job15: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:11:04.295 fio-3.35 00:11:04.295 Starting 16 threads 00:11:04.295 [2024-07-24 19:49:32.753793] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:04.295 [2024-07-24 19:49:32.756002] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:04.295 [2024-07-24 19:49:32.758667] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:04.295 [2024-07-24 
19:49:32.760801] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:04.295 [2024-07-24 19:49:32.762593] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:04.295 [2024-07-24 19:49:32.764342] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:04.295 [2024-07-24 19:49:32.766636] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:04.295 [2024-07-24 19:49:32.768503] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:04.295 [2024-07-24 19:49:32.770108] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:04.295 [2024-07-24 19:49:32.771814] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:04.295 [2024-07-24 19:49:32.774234] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:04.296 [2024-07-24 19:49:32.778060] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:04.296 [2024-07-24 19:49:32.780547] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:04.296 [2024-07-24 19:49:32.782568] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:04.296 [2024-07-24 19:49:32.785096] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:04.296 [2024-07-24 19:49:32.787395] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:05.704 [2024-07-24 19:49:34.180251] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:05.704 [2024-07-24 19:49:34.183658] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:05.704 [2024-07-24 19:49:34.185585] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:05.704 [2024-07-24 19:49:34.189093] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:05.704 [2024-07-24 19:49:34.192412] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:05.704 [2024-07-24 19:49:34.194840] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:05.704 [2024-07-24 19:49:34.197133] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:05.704 [2024-07-24 19:49:34.199282] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:05.704 [2024-07-24 19:49:34.202301] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:05.704 [2024-07-24 19:49:34.204690] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:05.704 [2024-07-24 19:49:34.206969] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:05.704 [2024-07-24 19:49:34.210908] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:05.704 [2024-07-24 19:49:34.213065] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:05.704 00:11:05.704 job0: (groupid=0, jobs=1): err= 0: pid=70872: Wed Jul 24 19:49:34 2024 00:11:05.704 read: IOPS=481, BW=60.2MiB/s (63.1MB/s)(62.8MiB/1043msec) 00:11:05.704 slat (usec): min=8, max=758, avg=26.20, stdev=57.25 00:11:05.704 clat (usec): min=2870, max=49118, avg=8756.61, stdev=4190.43 00:11:05.704 lat (usec): min=2884, max=49137, avg=8782.81, stdev=4187.85 00:11:05.704 clat percentiles (usec): 00:11:05.704 | 1.00th=[ 3490], 5.00th=[ 6587], 10.00th=[ 7373], 20.00th=[ 7635], 00:11:05.704 | 30.00th=[ 7832], 40.00th=[ 7963], 50.00th=[ 8094], 60.00th=[ 8356], 00:11:05.704 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[10159], 95.00th=[12518], 00:11:05.704 | 99.00th=[17171], 99.50th=[46400], 99.90th=[49021], 99.95th=[49021], 00:11:05.704 | 99.99th=[49021] 00:11:05.704 bw ( KiB/s): min=63105, max=64000, 
per=6.54%, avg=63552.50, stdev=632.86, samples=2 00:11:05.704 iops : min= 493, max= 500, avg=496.50, stdev= 4.95, samples=2 00:11:05.704 write: IOPS=510, BW=63.8MiB/s (66.9MB/s)(66.5MiB/1043msec); 0 zone resets 00:11:05.704 slat (usec): min=11, max=1484, avg=39.43, stdev=80.24 00:11:05.704 clat (usec): min=5644, max=86666, avg=54266.66, stdev=11708.95 00:11:05.704 lat (usec): min=5671, max=86686, avg=54306.10, stdev=11699.75 00:11:05.704 clat percentiles (usec): 00:11:05.704 | 1.00th=[ 7570], 5.00th=[23725], 10.00th=[48497], 20.00th=[51643], 00:11:05.704 | 30.00th=[54264], 40.00th=[55313], 50.00th=[56361], 60.00th=[57410], 00:11:05.704 | 70.00th=[57934], 80.00th=[58983], 90.00th=[61080], 95.00th=[64226], 00:11:05.704 | 99.00th=[86508], 99.50th=[86508], 99.90th=[86508], 99.95th=[86508], 00:11:05.704 | 99.99th=[86508] 00:11:05.704 bw ( KiB/s): min=64383, max=65024, per=6.50%, avg=64703.50, stdev=453.26, samples=2 00:11:05.704 iops : min= 502, max= 508, avg=505.00, stdev= 4.24, samples=2 00:11:05.704 lat (msec) : 4=1.06%, 10=43.42%, 20=6.00%, 50=4.45%, 100=45.07% 00:11:05.704 cpu : usr=0.67%, sys=2.11%, ctx=977, majf=0, minf=1 00:11:05.704 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=97.0%, >=64=0.0% 00:11:05.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.704 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:11:05.704 issued rwts: total=502,532,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.704 latency : target=0, window=0, percentile=100.00%, depth=32 00:11:05.704 job1: (groupid=0, jobs=1): err= 0: pid=70877: Wed Jul 24 19:49:34 2024 00:11:05.704 read: IOPS=455, BW=56.9MiB/s (59.7MB/s)(58.8MiB/1032msec) 00:11:05.704 slat (usec): min=9, max=818, avg=27.53, stdev=53.99 00:11:05.704 clat (usec): min=3443, max=36976, avg=8731.69, stdev=3379.50 00:11:05.704 lat (usec): min=3464, max=36999, avg=8759.23, stdev=3377.45 00:11:05.704 clat percentiles (usec): 00:11:05.704 | 1.00th=[ 6259], 5.00th=[ 6915], 
10.00th=[ 7373], 20.00th=[ 7570], 00:11:05.704 | 30.00th=[ 7701], 40.00th=[ 7898], 50.00th=[ 8094], 60.00th=[ 8356], 00:11:05.704 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[ 9503], 95.00th=[10683], 00:11:05.704 | 99.00th=[33424], 99.50th=[34866], 99.90th=[36963], 99.95th=[36963], 00:11:05.704 | 99.99th=[36963] 00:11:05.704 bw ( KiB/s): min=57856, max=61050, per=6.12%, avg=59453.00, stdev=2258.50, samples=2 00:11:05.704 iops : min= 452, max= 476, avg=464.00, stdev=16.97, samples=2 00:11:05.704 write: IOPS=508, BW=63.6MiB/s (66.7MB/s)(65.6MiB/1032msec); 0 zone resets 00:11:05.704 slat (usec): min=12, max=400, avg=33.77, stdev=36.40 00:11:05.704 clat (usec): min=11743, max=74747, avg=54888.92, stdev=6468.64 00:11:05.704 lat (usec): min=11764, max=74770, avg=54922.69, stdev=6470.43 00:11:05.704 clat percentiles (usec): 00:11:05.704 | 1.00th=[24773], 5.00th=[46924], 10.00th=[50070], 20.00th=[51643], 00:11:05.704 | 30.00th=[53216], 40.00th=[54264], 50.00th=[55313], 60.00th=[56361], 00:11:05.704 | 70.00th=[57410], 80.00th=[58983], 90.00th=[61080], 95.00th=[62653], 00:11:05.704 | 99.00th=[65799], 99.50th=[68682], 99.90th=[74974], 99.95th=[74974], 00:11:05.704 | 99.99th=[74974] 00:11:05.704 bw ( KiB/s): min=62845, max=65024, per=6.42%, avg=63934.50, stdev=1540.79, samples=2 00:11:05.704 iops : min= 490, max= 508, avg=499.00, stdev=12.73, samples=2 00:11:05.704 lat (msec) : 4=0.10%, 10=43.52%, 20=3.02%, 50=5.23%, 100=48.14% 00:11:05.704 cpu : usr=0.78%, sys=2.33%, ctx=923, majf=0, minf=1 00:11:05.704 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=96.9%, >=64=0.0% 00:11:05.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.704 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:11:05.704 issued rwts: total=470,525,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.704 latency : target=0, window=0, percentile=100.00%, depth=32 00:11:05.704 job2: (groupid=0, jobs=1): err= 0: pid=70896: Wed Jul 24 19:49:34 2024 
00:11:05.704 read: IOPS=456, BW=57.1MiB/s (59.9MB/s)(60.1MiB/1053msec) 00:11:05.704 slat (usec): min=8, max=901, avg=27.81, stdev=70.58 00:11:05.704 clat (usec): min=629, max=57415, avg=9656.98, stdev=4529.66 00:11:05.704 lat (usec): min=1504, max=57438, avg=9684.79, stdev=4524.80 00:11:05.704 clat percentiles (usec): 00:11:05.704 | 1.00th=[ 4015], 5.00th=[ 7767], 10.00th=[ 8160], 20.00th=[ 8586], 00:11:05.704 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9372], 00:11:05.704 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10552], 95.00th=[11600], 00:11:05.704 | 99.00th=[20055], 99.50th=[56886], 99.90th=[57410], 99.95th=[57410], 00:11:05.704 | 99.99th=[57410] 00:11:05.704 bw ( KiB/s): min=56974, max=65024, per=6.28%, avg=60999.00, stdev=5692.21, samples=2 00:11:05.704 iops : min= 445, max= 508, avg=476.50, stdev=44.55, samples=2 00:11:05.704 write: IOPS=447, BW=55.9MiB/s (58.6MB/s)(58.9MiB/1053msec); 0 zone resets 00:11:05.704 slat (usec): min=13, max=810, avg=38.61, stdev=58.51 00:11:05.704 clat (msec): min=3, max=112, avg=61.36, stdev=11.43 00:11:05.704 lat (msec): min=3, max=112, avg=61.40, stdev=11.43 00:11:05.704 clat percentiles (msec): 00:11:05.705 | 1.00th=[ 11], 5.00th=[ 51], 10.00th=[ 54], 20.00th=[ 57], 00:11:05.705 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 64], 00:11:05.705 | 70.00th=[ 65], 80.00th=[ 67], 90.00th=[ 70], 95.00th=[ 74], 00:11:05.705 | 99.00th=[ 102], 99.50th=[ 108], 99.90th=[ 112], 99.95th=[ 112], 00:11:05.705 | 99.99th=[ 112] 00:11:05.705 bw ( KiB/s): min=56718, max=56832, per=5.70%, avg=56775.00, stdev=80.61, samples=2 00:11:05.705 iops : min= 443, max= 444, avg=443.50, stdev= 0.71, samples=2 00:11:05.705 lat (usec) : 750=0.11% 00:11:05.705 lat (msec) : 2=0.11%, 4=0.42%, 10=41.28%, 20=8.93%, 50=1.68% 00:11:05.705 lat (msec) : 100=46.74%, 250=0.74% 00:11:05.705 cpu : usr=0.67%, sys=2.09%, ctx=877, majf=0, minf=1 00:11:05.705 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=96.7%, >=64=0.0% 00:11:05.705 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.705 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:11:05.705 issued rwts: total=481,471,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.705 latency : target=0, window=0, percentile=100.00%, depth=32 00:11:05.705 job3: (groupid=0, jobs=1): err= 0: pid=70916: Wed Jul 24 19:49:34 2024 00:11:05.705 read: IOPS=475, BW=59.5MiB/s (62.4MB/s)(61.6MiB/1036msec) 00:11:05.705 slat (usec): min=8, max=1428, avg=30.54, stdev=81.38 00:11:05.705 clat (usec): min=4964, max=38197, avg=8692.72, stdev=2737.69 00:11:05.705 lat (usec): min=6047, max=38220, avg=8723.27, stdev=2733.35 00:11:05.705 clat percentiles (usec): 00:11:05.705 | 1.00th=[ 6325], 5.00th=[ 7111], 10.00th=[ 7373], 20.00th=[ 7635], 00:11:05.705 | 30.00th=[ 7898], 40.00th=[ 8029], 50.00th=[ 8160], 60.00th=[ 8356], 00:11:05.705 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[ 9503], 95.00th=[11731], 00:11:05.705 | 99.00th=[17433], 99.50th=[35914], 99.90th=[38011], 99.95th=[38011], 00:11:05.705 | 99.99th=[38011] 00:11:05.705 bw ( KiB/s): min=58880, max=66427, per=6.45%, avg=62653.50, stdev=5336.53, samples=2 00:11:05.705 iops : min= 460, max= 518, avg=489.00, stdev=41.01, samples=2 00:11:05.705 write: IOPS=511, BW=63.9MiB/s (67.1MB/s)(66.2MiB/1036msec); 0 zone resets 00:11:05.705 slat (usec): min=13, max=10671, avg=59.88, stdev=465.54 00:11:05.705 clat (usec): min=8996, max=87812, avg=53899.15, stdev=8762.11 00:11:05.705 lat (usec): min=11944, max=87846, avg=53959.03, stdev=8670.57 00:11:05.705 clat percentiles (usec): 00:11:05.705 | 1.00th=[11994], 5.00th=[40633], 10.00th=[47973], 20.00th=[50594], 00:11:05.705 | 30.00th=[52691], 40.00th=[53740], 50.00th=[54789], 60.00th=[55837], 00:11:05.705 | 70.00th=[56886], 80.00th=[58459], 90.00th=[60031], 95.00th=[61604], 00:11:05.705 | 99.00th=[82314], 99.50th=[83362], 99.90th=[87557], 99.95th=[87557], 00:11:05.705 | 99.99th=[87557] 00:11:05.705 bw ( KiB/s): min=62594, max=65792, 
per=6.45%, avg=64193.00, stdev=2261.33, samples=2 00:11:05.705 iops : min= 489, max= 514, avg=501.50, stdev=17.68, samples=2 00:11:05.705 lat (msec) : 10=44.28%, 20=4.69%, 50=8.70%, 100=42.33% 00:11:05.705 cpu : usr=0.87%, sys=1.93%, ctx=960, majf=0, minf=1 00:11:05.705 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=97.0%, >=64=0.0% 00:11:05.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.705 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:11:05.705 issued rwts: total=493,530,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.705 latency : target=0, window=0, percentile=100.00%, depth=32 00:11:05.705 job4: (groupid=0, jobs=1): err= 0: pid=70973: Wed Jul 24 19:49:34 2024 00:11:05.705 read: IOPS=499, BW=62.4MiB/s (65.4MB/s)(64.2MiB/1030msec) 00:11:05.705 slat (usec): min=8, max=1411, avg=25.69, stdev=68.35 00:11:05.705 clat (usec): min=4657, max=35293, avg=8958.28, stdev=3357.23 00:11:05.705 lat (usec): min=6068, max=35317, avg=8983.97, stdev=3352.91 00:11:05.705 clat percentiles (usec): 00:11:05.705 | 1.00th=[ 6849], 5.00th=[ 7373], 10.00th=[ 7504], 20.00th=[ 7701], 00:11:05.705 | 30.00th=[ 7832], 40.00th=[ 8029], 50.00th=[ 8160], 60.00th=[ 8356], 00:11:05.705 | 70.00th=[ 8586], 80.00th=[ 8848], 90.00th=[ 9896], 95.00th=[14222], 00:11:05.705 | 99.00th=[29492], 99.50th=[33817], 99.90th=[35390], 99.95th=[35390], 00:11:05.705 | 99.99th=[35390] 00:11:05.705 bw ( KiB/s): min=62464, max=67584, per=6.69%, avg=65024.00, stdev=3620.39, samples=2 00:11:05.705 iops : min= 488, max= 528, avg=508.00, stdev=28.28, samples=2 00:11:05.705 write: IOPS=504, BW=63.1MiB/s (66.2MB/s)(65.0MiB/1030msec); 0 zone resets 00:11:05.705 slat (usec): min=12, max=1036, avg=39.31, stdev=75.25 00:11:05.705 clat (usec): min=7592, max=74430, avg=54350.96, stdev=6772.21 00:11:05.705 lat (usec): min=7619, max=74453, avg=54390.27, stdev=6775.30 00:11:05.705 clat percentiles (usec): 00:11:05.705 | 1.00th=[25297], 5.00th=[44827], 
10.00th=[48497], 20.00th=[51119], 00:11:05.705 | 30.00th=[53216], 40.00th=[54264], 50.00th=[55313], 60.00th=[56361], 00:11:05.705 | 70.00th=[56886], 80.00th=[58459], 90.00th=[60031], 95.00th=[61604], 00:11:05.705 | 99.00th=[67634], 99.50th=[73925], 99.90th=[73925], 99.95th=[73925], 00:11:05.705 | 99.99th=[73925] 00:11:05.705 bw ( KiB/s): min=61696, max=64768, per=6.35%, avg=63232.00, stdev=2172.23, samples=2 00:11:05.705 iops : min= 482, max= 506, avg=494.00, stdev=16.97, samples=2 00:11:05.705 lat (msec) : 10=44.87%, 20=4.26%, 50=7.83%, 100=43.04% 00:11:05.705 cpu : usr=0.97%, sys=1.94%, ctx=942, majf=0, minf=1 00:11:05.705 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=97.0%, >=64=0.0% 00:11:05.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.705 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:11:05.705 issued rwts: total=514,520,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.705 latency : target=0, window=0, percentile=100.00%, depth=32 00:11:05.705 job5: (groupid=0, jobs=1): err= 0: pid=70975: Wed Jul 24 19:49:34 2024 00:11:05.705 read: IOPS=519, BW=65.0MiB/s (68.1MB/s)(67.8MiB/1043msec) 00:11:05.705 slat (usec): min=8, max=908, avg=24.84, stdev=49.59 00:11:05.705 clat (usec): min=1804, max=46754, avg=8426.55, stdev=3111.71 00:11:05.705 lat (usec): min=1816, max=46767, avg=8451.39, stdev=3110.98 00:11:05.705 clat percentiles (usec): 00:11:05.705 | 1.00th=[ 5342], 5.00th=[ 6915], 10.00th=[ 7242], 20.00th=[ 7570], 00:11:05.705 | 30.00th=[ 7701], 40.00th=[ 7832], 50.00th=[ 8029], 60.00th=[ 8160], 00:11:05.705 | 70.00th=[ 8455], 80.00th=[ 8717], 90.00th=[ 9110], 95.00th=[ 9896], 00:11:05.705 | 99.00th=[17695], 99.50th=[43254], 99.90th=[46924], 99.95th=[46924], 00:11:05.705 | 99.99th=[46924] 00:11:05.705 bw ( KiB/s): min=60537, max=77312, per=7.09%, avg=68924.50, stdev=11861.72, samples=2 00:11:05.705 iops : min= 472, max= 604, avg=538.00, stdev=93.34, samples=2 00:11:05.705 write: IOPS=513, 
BW=64.2MiB/s (67.4MB/s)(67.0MiB/1043msec); 0 zone resets 00:11:05.705 slat (usec): min=12, max=3586, avg=44.28, stdev=164.03 00:11:05.705 clat (usec): min=5643, max=81745, avg=53553.98, stdev=9054.86 00:11:05.705 lat (usec): min=5671, max=81771, avg=53598.26, stdev=9049.84 00:11:05.705 clat percentiles (usec): 00:11:05.705 | 1.00th=[14222], 5.00th=[44303], 10.00th=[48497], 20.00th=[50070], 00:11:05.705 | 30.00th=[51643], 40.00th=[52691], 50.00th=[53740], 60.00th=[54789], 00:11:05.705 | 70.00th=[56361], 80.00th=[57934], 90.00th=[60031], 95.00th=[63701], 00:11:05.705 | 99.00th=[81265], 99.50th=[81265], 99.90th=[81265], 99.95th=[81265], 00:11:05.705 | 99.99th=[81265] 00:11:05.705 bw ( KiB/s): min=64256, max=65667, per=6.52%, avg=64961.50, stdev=997.73, samples=2 00:11:05.705 iops : min= 502, max= 513, avg=507.50, stdev= 7.78, samples=2 00:11:05.705 lat (msec) : 2=0.09%, 10=48.05%, 20=2.97%, 50=7.70%, 100=41.19% 00:11:05.705 cpu : usr=0.86%, sys=2.02%, ctx=996, majf=0, minf=1 00:11:05.705 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=97.1%, >=64=0.0% 00:11:05.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.705 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:11:05.705 issued rwts: total=542,536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.705 latency : target=0, window=0, percentile=100.00%, depth=32 00:11:05.705 job6: (groupid=0, jobs=1): err= 0: pid=70976: Wed Jul 24 19:49:34 2024 00:11:05.705 read: IOPS=444, BW=55.6MiB/s (58.3MB/s)(57.5MiB/1034msec) 00:11:05.705 slat (usec): min=7, max=341, avg=24.23, stdev=38.23 00:11:05.705 clat (usec): min=5242, max=33378, avg=9258.45, stdev=1538.45 00:11:05.705 lat (usec): min=5252, max=33387, avg=9282.68, stdev=1536.83 00:11:05.705 clat percentiles (usec): 00:11:05.705 | 1.00th=[ 5932], 5.00th=[ 7701], 10.00th=[ 8225], 20.00th=[ 8586], 00:11:05.705 | 30.00th=[ 8848], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9372], 00:11:05.705 | 70.00th=[ 9503], 
80.00th=[ 9765], 90.00th=[10290], 95.00th=[11076], 00:11:05.705 | 99.00th=[12649], 99.50th=[13042], 99.90th=[33424], 99.95th=[33424], 00:11:05.705 | 99.99th=[33424] 00:11:05.705 bw ( KiB/s): min=58368, max=59136, per=6.04%, avg=58752.00, stdev=543.06, samples=2 00:11:05.705 iops : min= 456, max= 462, avg=459.00, stdev= 4.24, samples=2 00:11:05.705 write: IOPS=456, BW=57.1MiB/s (59.8MB/s)(59.0MiB/1034msec); 0 zone resets 00:11:05.705 slat (usec): min=11, max=942, avg=37.19, stdev=59.13 00:11:05.705 clat (usec): min=10272, max=84617, avg=60879.08, stdev=8860.25 00:11:05.705 lat (usec): min=10309, max=84646, avg=60916.27, stdev=8861.97 00:11:05.705 clat percentiles (usec): 00:11:05.705 | 1.00th=[21890], 5.00th=[41681], 10.00th=[53740], 20.00th=[57410], 00:11:05.705 | 30.00th=[59507], 40.00th=[61080], 50.00th=[62653], 60.00th=[64226], 00:11:05.705 | 70.00th=[65274], 80.00th=[66847], 90.00th=[67634], 95.00th=[69731], 00:11:05.705 | 99.00th=[76022], 99.50th=[84411], 99.90th=[84411], 99.95th=[84411], 00:11:05.705 | 99.99th=[84411] 00:11:05.705 bw ( KiB/s): min=55808, max=57088, per=5.67%, avg=56448.00, stdev=905.10, samples=2 00:11:05.705 iops : min= 436, max= 446, avg=441.00, stdev= 7.07, samples=2 00:11:05.705 lat (msec) : 10=42.49%, 20=7.19%, 50=3.11%, 100=47.21% 00:11:05.705 cpu : usr=0.87%, sys=1.45%, ctx=858, majf=0, minf=1 00:11:05.705 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=96.7%, >=64=0.0% 00:11:05.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.705 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:11:05.705 issued rwts: total=460,472,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.705 latency : target=0, window=0, percentile=100.00%, depth=32 00:11:05.705 job7: (groupid=0, jobs=1): err= 0: pid=70977: Wed Jul 24 19:49:34 2024 00:11:05.705 read: IOPS=459, BW=57.4MiB/s (60.2MB/s)(59.4MiB/1034msec) 00:11:05.705 slat (usec): min=8, max=597, avg=29.26, stdev=54.52 00:11:05.705 clat (usec): 
min=6188, max=26760, avg=8680.44, stdev=2352.67 00:11:05.705 lat (usec): min=6226, max=26772, avg=8709.71, stdev=2349.00 00:11:05.705 clat percentiles (usec): 00:11:05.705 | 1.00th=[ 6390], 5.00th=[ 6980], 10.00th=[ 7373], 20.00th=[ 7635], 00:11:05.705 | 30.00th=[ 7832], 40.00th=[ 7963], 50.00th=[ 8160], 60.00th=[ 8356], 00:11:05.705 | 70.00th=[ 8586], 80.00th=[ 8848], 90.00th=[ 9765], 95.00th=[12649], 00:11:05.706 | 99.00th=[19530], 99.50th=[25560], 99.90th=[26870], 99.95th=[26870], 00:11:05.706 | 99.99th=[26870] 00:11:05.706 bw ( KiB/s): min=56576, max=64897, per=6.25%, avg=60736.50, stdev=5883.84, samples=2 00:11:05.706 iops : min= 442, max= 507, avg=474.50, stdev=45.96, samples=2 00:11:05.706 write: IOPS=510, BW=63.8MiB/s (66.9MB/s)(66.0MiB/1034msec); 0 zone resets 00:11:05.706 slat (usec): min=11, max=378, avg=37.49, stdev=38.83 00:11:05.706 clat (usec): min=11411, max=78749, avg=54703.36, stdev=6892.09 00:11:05.706 lat (usec): min=11435, max=78780, avg=54740.86, stdev=6895.33 00:11:05.706 clat percentiles (usec): 00:11:05.706 | 1.00th=[26084], 5.00th=[45351], 10.00th=[48497], 20.00th=[51119], 00:11:05.706 | 30.00th=[53216], 40.00th=[54264], 50.00th=[55313], 60.00th=[56361], 00:11:05.706 | 70.00th=[57410], 80.00th=[58983], 90.00th=[60556], 95.00th=[63177], 00:11:05.706 | 99.00th=[70779], 99.50th=[76022], 99.90th=[79168], 99.95th=[79168], 00:11:05.706 | 99.99th=[79168] 00:11:05.706 bw ( KiB/s): min=62332, max=64512, per=6.37%, avg=63422.00, stdev=1541.49, samples=2 00:11:05.706 iops : min= 486, max= 504, avg=495.00, stdev=12.73, samples=2 00:11:05.706 lat (msec) : 10=43.27%, 20=3.99%, 50=7.18%, 100=45.56% 00:11:05.706 cpu : usr=1.06%, sys=1.74%, ctx=947, majf=0, minf=1 00:11:05.706 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=96.9%, >=64=0.0% 00:11:05.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.706 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:11:05.706 issued rwts: 
total=475,528,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.706 latency : target=0, window=0, percentile=100.00%, depth=32 00:11:05.706 job8: (groupid=0, jobs=1): err= 0: pid=70978: Wed Jul 24 19:49:34 2024 00:11:05.706 read: IOPS=458, BW=57.3MiB/s (60.0MB/s)(58.8MiB/1026msec) 00:11:05.706 slat (usec): min=7, max=344, avg=23.58, stdev=36.77 00:11:05.706 clat (usec): min=1355, max=36684, avg=8621.50, stdev=3406.14 00:11:05.706 lat (usec): min=1365, max=36700, avg=8645.07, stdev=3406.23 00:11:05.706 clat percentiles (usec): 00:11:05.706 | 1.00th=[ 2802], 5.00th=[ 6456], 10.00th=[ 7242], 20.00th=[ 7570], 00:11:05.706 | 30.00th=[ 7767], 40.00th=[ 7898], 50.00th=[ 8094], 60.00th=[ 8225], 00:11:05.706 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9634], 95.00th=[12649], 00:11:05.706 | 99.00th=[30802], 99.50th=[34866], 99.90th=[36439], 99.95th=[36439], 00:11:05.706 | 99.99th=[36439] 00:11:05.706 bw ( KiB/s): min=58624, max=60160, per=6.11%, avg=59392.00, stdev=1086.12, samples=2 00:11:05.706 iops : min= 458, max= 470, avg=464.00, stdev= 8.49, samples=2 00:11:05.706 write: IOPS=504, BW=63.1MiB/s (66.2MB/s)(64.8MiB/1026msec); 0 zone resets 00:11:05.706 slat (usec): min=10, max=1091, avg=43.78, stdev=78.20 00:11:05.706 clat (usec): min=7548, max=81986, avg=55404.10, stdev=7222.05 00:11:05.706 lat (usec): min=7576, max=82006, avg=55447.88, stdev=7222.62 00:11:05.706 clat percentiles (usec): 00:11:05.706 | 1.00th=[25297], 5.00th=[46400], 10.00th=[49546], 20.00th=[52167], 00:11:05.706 | 30.00th=[53740], 40.00th=[55313], 50.00th=[55837], 60.00th=[56886], 00:11:05.706 | 70.00th=[57934], 80.00th=[59507], 90.00th=[62129], 95.00th=[63177], 00:11:05.706 | 99.00th=[71828], 99.50th=[81265], 99.90th=[82314], 99.95th=[82314], 00:11:05.706 | 99.99th=[82314] 00:11:05.706 bw ( KiB/s): min=62208, max=63744, per=6.32%, avg=62976.00, stdev=1086.12, samples=2 00:11:05.706 iops : min= 486, max= 498, avg=492.00, stdev= 8.49, samples=2 00:11:05.706 lat (msec) : 2=0.30%, 4=0.61%, 10=42.51%, 20=3.95%, 
50=6.38% 00:11:05.706 lat (msec) : 100=46.26% 00:11:05.706 cpu : usr=1.37%, sys=1.37%, ctx=909, majf=0, minf=1 00:11:05.706 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=96.9%, >=64=0.0% 00:11:05.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.706 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:11:05.706 issued rwts: total=470,518,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.706 latency : target=0, window=0, percentile=100.00%, depth=32 00:11:05.706 job9: (groupid=0, jobs=1): err= 0: pid=70979: Wed Jul 24 19:49:34 2024 00:11:05.706 read: IOPS=516, BW=64.6MiB/s (67.7MB/s)(67.6MiB/1047msec) 00:11:05.706 slat (usec): min=7, max=425, avg=21.75, stdev=30.44 00:11:05.706 clat (usec): min=637, max=19732, avg=8177.69, stdev=1728.80 00:11:05.706 lat (usec): min=648, max=19754, avg=8199.44, stdev=1729.70 00:11:05.706 clat percentiles (usec): 00:11:05.706 | 1.00th=[ 4686], 5.00th=[ 5276], 10.00th=[ 6915], 20.00th=[ 7439], 00:11:05.706 | 30.00th=[ 7701], 40.00th=[ 7898], 50.00th=[ 8029], 60.00th=[ 8291], 00:11:05.706 | 70.00th=[ 8586], 80.00th=[ 8848], 90.00th=[ 9503], 95.00th=[10290], 00:11:05.706 | 99.00th=[14484], 99.50th=[17695], 99.90th=[19792], 99.95th=[19792], 00:11:05.706 | 99.99th=[19792] 00:11:05.706 bw ( KiB/s): min=62976, max=75671, per=7.13%, avg=69323.50, stdev=8976.72, samples=2 00:11:05.706 iops : min= 492, max= 591, avg=541.50, stdev=70.00, samples=2 00:11:05.706 write: IOPS=507, BW=63.4MiB/s (66.5MB/s)(66.4MiB/1047msec); 0 zone resets 00:11:05.706 slat (usec): min=10, max=6365, avg=43.59, stdev=276.21 00:11:05.706 clat (usec): min=1080, max=98964, avg=54567.84, stdev=10720.74 00:11:05.706 lat (usec): min=1122, max=98997, avg=54611.43, stdev=10720.19 00:11:05.706 clat percentiles (usec): 00:11:05.706 | 1.00th=[ 9896], 5.00th=[38011], 10.00th=[49021], 20.00th=[51643], 00:11:05.706 | 30.00th=[53216], 40.00th=[54264], 50.00th=[55313], 60.00th=[56361], 00:11:05.706 | 70.00th=[57934], 
80.00th=[58983], 90.00th=[61604], 95.00th=[64226], 00:11:05.706 | 99.00th=[92799], 99.50th=[95945], 99.90th=[99091], 99.95th=[99091], 00:11:05.706 | 99.99th=[99091] 00:11:05.706 bw ( KiB/s): min=63615, max=64256, per=6.42%, avg=63935.50, stdev=453.26, samples=2 00:11:05.706 iops : min= 496, max= 502, avg=499.00, stdev= 4.24, samples=2 00:11:05.706 lat (usec) : 750=0.19% 00:11:05.706 lat (msec) : 2=0.19%, 10=47.39%, 20=4.29%, 50=4.48%, 100=43.47% 00:11:05.706 cpu : usr=1.53%, sys=1.43%, ctx=931, majf=0, minf=1 00:11:05.706 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=97.1%, >=64=0.0% 00:11:05.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.706 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:11:05.706 issued rwts: total=541,531,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.706 latency : target=0, window=0, percentile=100.00%, depth=32 00:11:05.706 job10: (groupid=0, jobs=1): err= 0: pid=70980: Wed Jul 24 19:49:34 2024 00:11:05.706 read: IOPS=482, BW=60.3MiB/s (63.2MB/s)(63.2MiB/1049msec) 00:11:05.706 slat (usec): min=8, max=3832, avg=29.75, stdev=172.65 00:11:05.706 clat (usec): min=924, max=56473, avg=9847.93, stdev=4804.01 00:11:05.706 lat (usec): min=937, max=56497, avg=9877.68, stdev=4804.06 00:11:05.706 clat percentiles (usec): 00:11:05.706 | 1.00th=[ 4490], 5.00th=[ 7373], 10.00th=[ 7898], 20.00th=[ 8455], 00:11:05.706 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9503], 00:11:05.706 | 70.00th=[ 9765], 80.00th=[10028], 90.00th=[10814], 95.00th=[12780], 00:11:05.706 | 99.00th=[25035], 99.50th=[53740], 99.90th=[56361], 99.95th=[56361], 00:11:05.706 | 99.99th=[56361] 00:11:05.706 bw ( KiB/s): min=58368, max=69888, per=6.60%, avg=64128.00, stdev=8145.87, samples=2 00:11:05.706 iops : min= 456, max= 546, avg=501.00, stdev=63.64, samples=2 00:11:05.706 write: IOPS=447, BW=55.9MiB/s (58.6MB/s)(58.6MiB/1049msec); 0 zone resets 00:11:05.706 slat (usec): min=11, max=6026, avg=51.09, 
stdev=317.88 00:11:05.706 clat (usec): min=5087, max=87713, avg=60692.61, stdev=10195.74 00:11:05.706 lat (usec): min=5151, max=87745, avg=60743.70, stdev=10190.76 00:11:05.706 clat percentiles (usec): 00:11:05.706 | 1.00th=[10945], 5.00th=[47449], 10.00th=[53740], 20.00th=[56361], 00:11:05.706 | 30.00th=[58459], 40.00th=[60031], 50.00th=[61604], 60.00th=[63177], 00:11:05.706 | 70.00th=[64750], 80.00th=[66323], 90.00th=[69731], 95.00th=[71828], 00:11:05.706 | 99.00th=[87557], 99.50th=[87557], 99.90th=[87557], 99.95th=[87557], 00:11:05.706 | 99.99th=[87557] 00:11:05.706 bw ( KiB/s): min=56320, max=56832, per=5.68%, avg=56576.00, stdev=362.04, samples=2 00:11:05.706 iops : min= 440, max= 444, avg=442.00, stdev= 2.83, samples=2 00:11:05.706 lat (usec) : 1000=0.21% 00:11:05.706 lat (msec) : 10=41.95%, 20=9.44%, 50=2.87%, 100=45.54% 00:11:05.706 cpu : usr=1.43%, sys=1.34%, ctx=856, majf=0, minf=1 00:11:05.706 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=96.8%, >=64=0.0% 00:11:05.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.706 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:11:05.706 issued rwts: total=506,469,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.706 latency : target=0, window=0, percentile=100.00%, depth=32 00:11:05.706 job11: (groupid=0, jobs=1): err= 0: pid=70981: Wed Jul 24 19:49:34 2024 00:11:05.706 read: IOPS=485, BW=60.7MiB/s (63.6MB/s)(63.2MiB/1042msec) 00:11:05.706 slat (usec): min=9, max=681, avg=22.70, stdev=39.48 00:11:05.706 clat (usec): min=1003, max=45712, avg=8832.56, stdev=3781.67 00:11:05.706 lat (usec): min=1018, max=45743, avg=8855.26, stdev=3782.43 00:11:05.706 clat percentiles (usec): 00:11:05.706 | 1.00th=[ 1549], 5.00th=[ 7177], 10.00th=[ 7439], 20.00th=[ 7767], 00:11:05.706 | 30.00th=[ 7963], 40.00th=[ 8094], 50.00th=[ 8291], 60.00th=[ 8455], 00:11:05.706 | 70.00th=[ 8586], 80.00th=[ 8848], 90.00th=[10290], 95.00th=[12780], 00:11:05.706 | 99.00th=[18482], 
99.50th=[43779], 99.90th=[45876], 99.95th=[45876], 00:11:05.706 | 99.99th=[45876] 00:11:05.706 bw ( KiB/s): min=57996, max=70400, per=6.60%, avg=64198.00, stdev=8770.95, samples=2 00:11:05.706 iops : min= 453, max= 550, avg=501.50, stdev=68.59, samples=2 00:11:05.706 write: IOPS=499, BW=62.4MiB/s (65.4MB/s)(65.0MiB/1042msec); 0 zone resets 00:11:05.706 slat (usec): min=13, max=670, avg=39.73, stdev=62.55 00:11:05.706 clat (msec): min=3, max=107, avg=55.35, stdev= 9.66 00:11:05.706 lat (msec): min=3, max=107, avg=55.39, stdev= 9.66 00:11:05.706 clat percentiles (msec): 00:11:05.706 | 1.00th=[ 13], 5.00th=[ 46], 10.00th=[ 50], 20.00th=[ 52], 00:11:05.706 | 30.00th=[ 54], 40.00th=[ 55], 50.00th=[ 56], 60.00th=[ 57], 00:11:05.706 | 70.00th=[ 58], 80.00th=[ 60], 90.00th=[ 62], 95.00th=[ 66], 00:11:05.706 | 99.00th=[ 88], 99.50th=[ 88], 99.90th=[ 108], 99.95th=[ 108], 00:11:05.706 | 99.99th=[ 108] 00:11:05.706 bw ( KiB/s): min=62464, max=63616, per=6.33%, avg=63040.00, stdev=814.59, samples=2 00:11:05.706 iops : min= 488, max= 497, avg=492.50, stdev= 6.36, samples=2 00:11:05.706 lat (msec) : 2=0.68%, 4=0.39%, 10=42.88%, 20=5.95%, 50=6.14% 00:11:05.706 lat (msec) : 100=43.76%, 250=0.19% 00:11:05.706 cpu : usr=1.06%, sys=1.54%, ctx=939, majf=0, minf=1 00:11:05.706 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=97.0%, >=64=0.0% 00:11:05.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.706 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:11:05.707 issued rwts: total=506,520,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.707 latency : target=0, window=0, percentile=100.00%, depth=32 00:11:05.707 job12: (groupid=0, jobs=1): err= 0: pid=70982: Wed Jul 24 19:49:34 2024 00:11:05.707 read: IOPS=539, BW=67.4MiB/s (70.7MB/s)(69.9MiB/1037msec) 00:11:05.707 slat (usec): min=8, max=935, avg=26.53, stdev=50.00 00:11:05.707 clat (usec): min=3576, max=42452, avg=8721.28, stdev=3345.93 00:11:05.707 lat (usec): min=3589, 
max=42464, avg=8747.81, stdev=3343.00 00:11:05.707 clat percentiles (usec): 00:11:05.707 | 1.00th=[ 5866], 5.00th=[ 7242], 10.00th=[ 7439], 20.00th=[ 7635], 00:11:05.707 | 30.00th=[ 7832], 40.00th=[ 7963], 50.00th=[ 8094], 60.00th=[ 8225], 00:11:05.707 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[10028], 95.00th=[12518], 00:11:05.707 | 99.00th=[16581], 99.50th=[40633], 99.90th=[42206], 99.95th=[42206], 00:11:05.707 | 99.99th=[42206] 00:11:05.707 bw ( KiB/s): min=64768, max=77056, per=7.29%, avg=70912.00, stdev=8688.93, samples=2 00:11:05.707 iops : min= 506, max= 602, avg=554.00, stdev=67.88, samples=2 00:11:05.707 write: IOPS=505, BW=63.2MiB/s (66.2MB/s)(65.5MiB/1037msec); 0 zone resets 00:11:05.707 slat (usec): min=11, max=487, avg=36.30, stdev=41.26 00:11:05.707 clat (usec): min=4207, max=86568, avg=53808.88, stdev=7820.54 00:11:05.707 lat (usec): min=4254, max=86586, avg=53845.18, stdev=7820.90 00:11:05.707 clat percentiles (usec): 00:11:05.707 | 1.00th=[15795], 5.00th=[45876], 10.00th=[49546], 20.00th=[51119], 00:11:05.707 | 30.00th=[52167], 40.00th=[53216], 50.00th=[54264], 60.00th=[54789], 00:11:05.707 | 70.00th=[55837], 80.00th=[57410], 90.00th=[58983], 95.00th=[60556], 00:11:05.707 | 99.00th=[80217], 99.50th=[82314], 99.90th=[86508], 99.95th=[86508], 00:11:05.707 | 99.99th=[86508] 00:11:05.707 bw ( KiB/s): min=62464, max=64768, per=6.39%, avg=63616.00, stdev=1629.17, samples=2 00:11:05.707 iops : min= 488, max= 506, avg=497.00, stdev=12.73, samples=2 00:11:05.707 lat (msec) : 4=0.46%, 10=46.35%, 20=4.99%, 50=5.91%, 100=42.29% 00:11:05.707 cpu : usr=0.68%, sys=2.70%, ctx=967, majf=0, minf=1 00:11:05.707 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=97.1%, >=64=0.0% 00:11:05.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.707 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:11:05.707 issued rwts: total=559,524,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.707 latency : target=0, window=0, 
percentile=100.00%, depth=32 00:11:05.707 job13: (groupid=0, jobs=1): err= 0: pid=70983: Wed Jul 24 19:49:34 2024 00:11:05.707 read: IOPS=528, BW=66.0MiB/s (69.2MB/s)(68.0MiB/1030msec) 00:11:05.707 slat (usec): min=9, max=598, avg=23.49, stdev=38.04 00:11:05.707 clat (usec): min=2586, max=34734, avg=8588.17, stdev=2800.59 00:11:05.707 lat (usec): min=2601, max=34753, avg=8611.67, stdev=2798.77 00:11:05.707 clat percentiles (usec): 00:11:05.707 | 1.00th=[ 6128], 5.00th=[ 6783], 10.00th=[ 7177], 20.00th=[ 7570], 00:11:05.707 | 30.00th=[ 7701], 40.00th=[ 7898], 50.00th=[ 8094], 60.00th=[ 8291], 00:11:05.707 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[ 9634], 95.00th=[11600], 00:11:05.707 | 99.00th=[21103], 99.50th=[31589], 99.90th=[34866], 99.95th=[34866], 00:11:05.707 | 99.99th=[34866] 00:11:05.707 bw ( KiB/s): min=66560, max=71567, per=7.10%, avg=69063.50, stdev=3540.48, samples=2 00:11:05.707 iops : min= 520, max= 559, avg=539.50, stdev=27.58, samples=2 00:11:05.707 write: IOPS=514, BW=64.3MiB/s (67.4MB/s)(66.2MiB/1030msec); 0 zone resets 00:11:05.707 slat (usec): min=12, max=337, avg=33.89, stdev=36.80 00:11:05.707 clat (usec): min=11153, max=76362, avg=53185.72, stdev=7001.51 00:11:05.707 lat (usec): min=11193, max=76396, avg=53219.61, stdev=7001.45 00:11:05.707 clat percentiles (usec): 00:11:05.707 | 1.00th=[25035], 5.00th=[42206], 10.00th=[45876], 20.00th=[49021], 00:11:05.707 | 30.00th=[51643], 40.00th=[53216], 50.00th=[54264], 60.00th=[54789], 00:11:05.707 | 70.00th=[55837], 80.00th=[57934], 90.00th=[60031], 95.00th=[61604], 00:11:05.707 | 99.00th=[69731], 99.50th=[71828], 99.90th=[76022], 99.95th=[76022], 00:11:05.707 | 99.99th=[76022] 00:11:05.707 bw ( KiB/s): min=62845, max=66048, per=6.47%, avg=64446.50, stdev=2264.86, samples=2 00:11:05.707 iops : min= 490, max= 516, avg=503.00, stdev=18.38, samples=2 00:11:05.707 lat (msec) : 4=0.19%, 10=45.90%, 20=4.28%, 50=11.17%, 100=38.45% 00:11:05.707 cpu : usr=1.17%, sys=1.55%, ctx=989, majf=0, minf=1 
00:11:05.707 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=97.1%, >=64=0.0% 00:11:05.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.707 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:11:05.707 issued rwts: total=544,530,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.707 latency : target=0, window=0, percentile=100.00%, depth=32 00:11:05.707 job14: (groupid=0, jobs=1): err= 0: pid=70984: Wed Jul 24 19:49:34 2024 00:11:05.707 read: IOPS=430, BW=53.8MiB/s (56.4MB/s)(55.9MiB/1038msec) 00:11:05.707 slat (usec): min=8, max=1770, avg=27.69, stdev=88.88 00:11:05.707 clat (usec): min=2411, max=45602, avg=9389.62, stdev=2961.40 00:11:05.707 lat (usec): min=2900, max=45622, avg=9417.31, stdev=2951.19 00:11:05.707 clat percentiles (usec): 00:11:05.707 | 1.00th=[ 3982], 5.00th=[ 7898], 10.00th=[ 8225], 20.00th=[ 8586], 00:11:05.707 | 30.00th=[ 8848], 40.00th=[ 8979], 50.00th=[ 9110], 60.00th=[ 9372], 00:11:05.707 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[10290], 95.00th=[10945], 00:11:05.707 | 99.00th=[14484], 99.50th=[38536], 99.90th=[45351], 99.95th=[45351], 00:11:05.707 | 99.99th=[45351] 00:11:05.707 bw ( KiB/s): min=54272, max=59392, per=5.85%, avg=56832.00, stdev=3620.39, samples=2 00:11:05.707 iops : min= 424, max= 464, avg=444.00, stdev=28.28, samples=2 00:11:05.707 write: IOPS=449, BW=56.2MiB/s (59.0MB/s)(58.4MiB/1038msec); 0 zone resets 00:11:05.707 slat (usec): min=11, max=1205, avg=36.05, stdev=63.01 00:11:05.707 clat (usec): min=8660, max=94650, avg=61757.79, stdev=9168.87 00:11:05.707 lat (usec): min=8689, max=94669, avg=61793.84, stdev=9165.90 00:11:05.707 clat percentiles (usec): 00:11:05.707 | 1.00th=[20055], 5.00th=[49546], 10.00th=[55313], 20.00th=[57934], 00:11:05.707 | 30.00th=[59507], 40.00th=[61080], 50.00th=[62129], 60.00th=[63701], 00:11:05.707 | 70.00th=[65274], 80.00th=[67634], 90.00th=[68682], 95.00th=[70779], 00:11:05.707 | 99.00th=[88605], 99.50th=[90702], 99.90th=[94897], 
99.95th=[94897], 00:11:05.707 | 99.99th=[94897] 00:11:05.707 bw ( KiB/s): min=55808, max=56576, per=5.64%, avg=56192.00, stdev=543.06, samples=2 00:11:05.707 iops : min= 436, max= 442, avg=439.00, stdev= 4.24, samples=2 00:11:05.707 lat (msec) : 4=0.66%, 10=41.79%, 20=6.56%, 50=2.63%, 100=48.36% 00:11:05.707 cpu : usr=0.29%, sys=2.41%, ctx=832, majf=0, minf=1 00:11:05.707 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=96.6%, >=64=0.0% 00:11:05.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.707 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:11:05.707 issued rwts: total=447,467,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.707 latency : target=0, window=0, percentile=100.00%, depth=32 00:11:05.707 job15: (groupid=0, jobs=1): err= 0: pid=70985: Wed Jul 24 19:49:34 2024 00:11:05.707 read: IOPS=470, BW=58.8MiB/s (61.7MB/s)(60.9MiB/1035msec) 00:11:05.707 slat (usec): min=8, max=1209, avg=26.14, stdev=62.49 00:11:05.707 clat (usec): min=5035, max=44128, avg=8811.86, stdev=3761.10 00:11:05.707 lat (usec): min=5049, max=44142, avg=8838.00, stdev=3759.92 00:11:05.707 clat percentiles (usec): 00:11:05.707 | 1.00th=[ 6652], 5.00th=[ 7111], 10.00th=[ 7373], 20.00th=[ 7635], 00:11:05.707 | 30.00th=[ 7832], 40.00th=[ 8029], 50.00th=[ 8160], 60.00th=[ 8356], 00:11:05.707 | 70.00th=[ 8586], 80.00th=[ 8848], 90.00th=[ 9372], 95.00th=[12125], 00:11:05.707 | 99.00th=[39584], 99.50th=[41681], 99.90th=[44303], 99.95th=[44303], 00:11:05.707 | 99.99th=[44303] 00:11:05.707 bw ( KiB/s): min=56832, max=66693, per=6.35%, avg=61762.50, stdev=6972.78, samples=2 00:11:05.707 iops : min= 444, max= 521, avg=482.50, stdev=54.45, samples=2 00:11:05.707 write: IOPS=501, BW=62.7MiB/s (65.7MB/s)(64.9MiB/1035msec); 0 zone resets 00:11:05.707 slat (usec): min=13, max=607, avg=36.15, stdev=41.84 00:11:05.707 clat (usec): min=11867, max=91240, avg=55361.04, stdev=7570.08 00:11:05.707 lat (usec): min=11926, max=91267, avg=55397.18, 
stdev=7570.72 00:11:05.707 clat percentiles (usec): 00:11:05.707 | 1.00th=[27132], 5.00th=[45876], 10.00th=[49546], 20.00th=[52167], 00:11:05.707 | 30.00th=[53216], 40.00th=[54264], 50.00th=[55313], 60.00th=[56361], 00:11:05.707 | 70.00th=[57934], 80.00th=[58983], 90.00th=[61080], 95.00th=[63177], 00:11:05.707 | 99.00th=[83362], 99.50th=[90702], 99.90th=[91751], 99.95th=[91751], 00:11:05.707 | 99.99th=[91751] 00:11:05.707 bw ( KiB/s): min=61050, max=65024, per=6.33%, avg=63037.00, stdev=2810.04, samples=2 00:11:05.707 iops : min= 476, max= 508, avg=492.00, stdev=22.63, samples=2 00:11:05.707 lat (msec) : 10=44.23%, 20=3.98%, 50=6.26%, 100=45.53% 00:11:05.707 cpu : usr=0.68%, sys=2.42%, ctx=858, majf=0, minf=1 00:11:05.707 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=96.9%, >=64=0.0% 00:11:05.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.707 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:11:05.707 issued rwts: total=487,519,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.707 latency : target=0, window=0, percentile=100.00%, depth=32 00:11:05.707 00:11:05.707 Run status group 0 (all jobs): 00:11:05.707 READ: bw=949MiB/s (995MB/s), 53.8MiB/s-67.4MiB/s (56.4MB/s-70.7MB/s), io=1000MiB (1048MB), run=1026-1053msec 00:11:05.707 WRITE: bw=972MiB/s (1020MB/s), 55.9MiB/s-64.3MiB/s (58.6MB/s-67.4MB/s), io=1024MiB (1074MB), run=1026-1053msec 00:11:05.707 00:11:05.707 Disk stats (read/write): 00:11:05.707 sda: ios=480/440, merge=0/0, ticks=3542/23362, in_queue=26905, util=72.42% 00:11:05.707 sdb: ios=433/423, merge=0/0, ticks=3194/23171, in_queue=26365, util=71.99% 00:11:05.707 sdc: ios=459/386, merge=0/0, ticks=3911/23166, in_queue=27078, util=74.87% 00:11:05.707 sdd: ios=436/428, merge=0/0, ticks=3557/22655, in_queue=26212, util=72.95% 00:11:05.707 sde: ios=432/420, merge=0/0, ticks=3639/22890, in_queue=26530, util=76.38% 00:11:05.707 sdf: ios=483/443, merge=0/0, ticks=3794/23262, in_queue=27057, 
util=77.97% 00:11:05.707 sdh: ios=398/373, merge=0/0, ticks=3608/22815, in_queue=26424, util=78.06% 00:11:05.707 sdg: ios=404/423, merge=0/0, ticks=3289/23079, in_queue=26369, util=79.04% 00:11:05.707 sdi: ios=397/419, merge=0/0, ticks=3183/23033, in_queue=26217, util=79.05% 00:11:05.707 sdj: ios=492/439, merge=0/0, ticks=3887/23284, in_queue=27171, util=83.32% 00:11:05.707 sdk: ios=430/388, merge=0/0, ticks=3845/23114, in_queue=26960, util=83.59% 00:11:05.707 sdl: ios=450/428, merge=0/0, ticks=3726/23064, in_queue=26790, util=83.91% 00:11:05.707 sdm: ios=498/425, merge=0/0, ticks=4138/22474, in_queue=26613, util=83.75% 00:11:05.708 sdn: ios=460/427, merge=0/0, ticks=3770/22742, in_queue=26513, util=84.63% 00:11:05.708 sdp: ios=389/379, merge=0/0, ticks=3537/23304, in_queue=26841, util=87.79% 00:11:05.708 sdo: ios=420/418, merge=0/0, ticks=3468/22985, in_queue=26454, util=86.16% 00:11:05.708 [2024-07-24 19:49:34.218308] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:05.708 [2024-07-24 19:49:34.220445] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:05.708 [2024-07-24 19:49:34.223494] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:05.708 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@82 -- # iscsicleanup 00:11:05.708 Cleaning up iSCSI connection 00:11:05.708 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection' 00:11:05.708 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout 00:11:06.276 Logging out of session [sid: 14, target: iqn.2016-06.io.spdk:Target0, portal: 10.0.0.1,3260] 00:11:06.276 Logging out of session [sid: 15, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:11:06.276 Logging out of session [sid: 16, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:11:06.276 Logging out 
of session [sid: 17, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:11:06.276 Logging out of session [sid: 18, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:11:06.276 Logging out of session [sid: 19, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:11:06.276 Logging out of session [sid: 20, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:11:06.276 Logging out of session [sid: 21, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:11:06.276 Logging out of session [sid: 22, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:11:06.276 Logging out of session [sid: 23, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:11:06.276 Logging out of session [sid: 24, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:11:06.276 Logging out of session [sid: 25, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] 00:11:06.276 Logging out of session [sid: 26, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] 00:11:06.276 Logging out of session [sid: 27, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] 00:11:06.276 Logging out of session [sid: 28, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] 00:11:06.276 Logging out of session [sid: 29, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] 00:11:06.276 Logout of [sid: 14, target: iqn.2016-06.io.spdk:Target0, portal: 10.0.0.1,3260] successful. 00:11:06.276 Logout of [sid: 15, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:11:06.276 Logout of [sid: 16, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:11:06.276 Logout of [sid: 17, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:11:06.276 Logout of [sid: 18, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:11:06.276 Logout of [sid: 19, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 
00:11:06.276 Logout of [sid: 20, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 00:11:06.276 Logout of [sid: 21, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:11:06.276 Logout of [sid: 22, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 00:11:06.277 Logout of [sid: 23, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:11:06.277 Logout of [sid: 24, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:11:06.277 Logout of [sid: 25, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] successful. 00:11:06.277 Logout of [sid: 26, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] successful. 00:11:06.277 Logout of [sid: 27, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] successful. 00:11:06.277 Logout of [sid: 28, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] successful. 00:11:06.277 Logout of [sid: 29, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] successful. 
00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@985 -- # rm -rf 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@84 -- # RPCS= 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # seq 0 15 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target0\n' 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc0\n' 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target1\n' 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc1\n' 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target2\n' 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc2\n' 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target3\n' 00:11:06.277 
19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc3\n' 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target4\n' 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc4\n' 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target5\n' 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc5\n' 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target6\n' 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc6\n' 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target7\n' 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc7\n' 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- 
trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target8\n' 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc8\n' 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target9\n' 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc9\n' 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target10\n' 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc10\n' 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target11\n' 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc11\n' 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target12\n' 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc12\n' 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- 
trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target13\n' 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc13\n' 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target14\n' 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc14\n' 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target15\n' 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc15\n' 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:06.277 19:49:34 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@90 -- # echo -e iscsi_delete_target_node 'iqn.2016-06.io.spdk:Target0\nbdev_malloc_delete' 'Malloc0\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target1\nbdev_malloc_delete' 'Malloc1\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target2\nbdev_malloc_delete' 'Malloc2\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target3\nbdev_malloc_delete' 'Malloc3\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target4\nbdev_malloc_delete' 'Malloc4\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target5\nbdev_malloc_delete' 'Malloc5\niscsi_delete_target_node' 
'iqn.2016-06.io.spdk:Target6\nbdev_malloc_delete' 'Malloc6\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target7\nbdev_malloc_delete' 'Malloc7\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target8\nbdev_malloc_delete' 'Malloc8\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target9\nbdev_malloc_delete' 'Malloc9\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target10\nbdev_malloc_delete' 'Malloc10\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target11\nbdev_malloc_delete' 'Malloc11\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target12\nbdev_malloc_delete' 'Malloc12\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target13\nbdev_malloc_delete' 'Malloc13\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target14\nbdev_malloc_delete' 'Malloc14\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target15\nbdev_malloc_delete' 'Malloc15\n' 00:11:07.212 19:49:35 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@92 -- # trap 'delete_tmp_files; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:11:07.212 19:49:35 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@94 -- # killprocess 70457 00:11:07.212 19:49:35 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@950 -- # '[' -z 70457 ']' 00:11:07.212 19:49:35 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@954 -- # kill -0 70457 00:11:07.212 19:49:35 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@955 -- # uname 00:11:07.212 19:49:35 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:07.212 19:49:35 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70457 00:11:07.212 19:49:35 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:07.212 19:49:35 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:07.212 19:49:35 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@968 
-- # echo 'killing process with pid 70457' 00:11:07.212 killing process with pid 70457 00:11:07.212 19:49:35 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@969 -- # kill 70457 00:11:07.212 19:49:35 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@974 -- # wait 70457 00:11:08.185 19:49:36 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@95 -- # killprocess 70492 00:11:08.185 19:49:36 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@950 -- # '[' -z 70492 ']' 00:11:08.185 19:49:36 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@954 -- # kill -0 70492 00:11:08.185 19:49:36 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@955 -- # uname 00:11:08.185 19:49:36 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:08.185 19:49:36 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70492 00:11:08.185 19:49:36 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@956 -- # process_name=spdk_trace_reco 00:11:08.185 19:49:36 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@960 -- # '[' spdk_trace_reco = sudo ']' 00:11:08.185 killing process with pid 70492 00:11:08.185 19:49:36 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70492' 00:11:08.185 19:49:36 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@969 -- # kill 70492 00:11:08.185 19:49:36 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@974 -- # wait 70492 00:11:08.185 19:49:36 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@96 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -f ./tmp-trace/record.trace 00:11:20.403 19:49:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@100 -- # grep 'trace entries for lcore' ./tmp-trace/record.notice 00:11:20.403 19:49:47 iscsi_tgt.iscsi_tgt_trace_record -- 
trace_record/trace_record.sh@100 -- # cut -d ' ' -f 2 00:11:20.403 19:49:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@100 -- # record_num='143798 00:11:20.403 144912 00:11:20.403 147166 00:11:20.403 132612' 00:11:20.403 19:49:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@103 -- # grep 'Trace Size of lcore' ./tmp-trace/trace.log 00:11:20.403 19:49:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@103 -- # cut -d ' ' -f 6 00:11:20.403 19:49:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@103 -- # trace_tool_num='143798 00:11:20.403 144912 00:11:20.403 147166 00:11:20.403 132612' 00:11:20.403 19:49:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@105 -- # delete_tmp_files 00:11:20.403 19:49:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@19 -- # rm -rf ./tmp-trace 00:11:20.403 entries numbers from trace record are: 143798 144912 147166 132612 00:11:20.403 19:49:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@107 -- # echo 'entries numbers from trace record are:' 143798 144912 147166 132612 00:11:20.403 entries numbers from trace tool are: 143798 144912 147166 132612 00:11:20.403 19:49:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@108 -- # echo 'entries numbers from trace tool are:' 143798 144912 147166 132612 00:11:20.403 19:49:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@110 -- # arr_record_num=($record_num) 00:11:20.403 19:49:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@111 -- # arr_trace_tool_num=($trace_tool_num) 00:11:20.403 19:49:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@112 -- # len_arr_record_num=4 00:11:20.403 19:49:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@113 -- # len_arr_trace_tool_num=4 00:11:20.403 19:49:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@116 -- # '[' 4 -ne 4 ']' 
00:11:20.403 19:49:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # seq 0 3 00:11:20.403 19:49:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # for i in $(seq 0 $((len_arr_record_num - 1))) 00:11:20.403 19:49:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@123 -- # '[' 143798 -le 4096 ']' 00:11:20.403 19:49:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@128 -- # '[' 143798 -ne 143798 ']' 00:11:20.403 19:49:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # for i in $(seq 0 $((len_arr_record_num - 1))) 00:11:20.403 19:49:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@123 -- # '[' 144912 -le 4096 ']' 00:11:20.403 19:49:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@128 -- # '[' 144912 -ne 144912 ']' 00:11:20.403 19:49:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # for i in $(seq 0 $((len_arr_record_num - 1))) 00:11:20.403 19:49:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@123 -- # '[' 147166 -le 4096 ']' 00:11:20.403 19:49:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@128 -- # '[' 147166 -ne 147166 ']' 00:11:20.403 19:49:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # for i in $(seq 0 $((len_arr_record_num - 1))) 00:11:20.403 19:49:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@123 -- # '[' 132612 -le 4096 ']' 00:11:20.403 19:49:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@128 -- # '[' 132612 -ne 132612 ']' 00:11:20.403 19:49:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@135 -- # trap - SIGINT SIGTERM EXIT 00:11:20.403 19:49:47 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@136 -- # iscsitestfini 00:11:20.403 19:49:47 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:11:20.403 
00:11:20.403 real 0m19.635s 00:11:20.403 user 0m44.951s 00:11:20.403 sys 0m4.955s 00:11:20.403 19:49:47 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:20.403 19:49:47 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@10 -- # set +x 00:11:20.403 ************************************ 00:11:20.403 END TEST iscsi_tgt_trace_record 00:11:20.403 ************************************ 00:11:20.403 19:49:47 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@41 -- # run_test iscsi_tgt_login_redirection /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/login_redirection/login_redirection.sh 00:11:20.403 19:49:47 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:20.403 19:49:47 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:20.403 19:49:47 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:11:20.403 ************************************ 00:11:20.403 START TEST iscsi_tgt_login_redirection 00:11:20.403 ************************************ 00:11:20.404 19:49:47 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/login_redirection/login_redirection.sh 00:11:20.404 * Looking for test storage... 
00:11:20.404 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/login_redirection 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@25 -- # 
INITIATOR_TAG=2 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@12 -- # iscsitestinit 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@14 -- # NULL_BDEV_SIZE=64 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@15 -- # NULL_BLOCK_SIZE=512 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@17 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@18 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@20 -- # rpc_addr1=/var/tmp/spdk0.sock 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@21 -- # rpc_addr2=/var/tmp/spdk1.sock 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@25 -- # timing_enter start_iscsi_tgts 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:11:20.404 19:49:48 
iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@28 -- # pid1=71322 00:11:20.404 Process pid: 71322 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@29 -- # echo 'Process pid: 71322' 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@27 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -r /var/tmp/spdk0.sock -i 0 -m 0x1 --wait-for-rpc 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@32 -- # pid2=71323 00:11:20.404 Process pid: 71323 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@33 -- # echo 'Process pid: 71323' 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@35 -- # trap 'killprocess $pid1; killprocess $pid2; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@31 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -r /var/tmp/spdk1.sock -i 1 -m 0x2 --wait-for-rpc 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@37 -- # waitforlisten 71322 /var/tmp/spdk0.sock 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@831 -- # '[' -z 71322 ']' 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk0.sock 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:20.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk0.sock... 
00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk0.sock...' 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:20.404 19:49:48 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:11:20.404 [2024-07-24 19:49:48.079742] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:11:20.404 [2024-07-24 19:49:48.079863] [ DPDK EAL parameters: iscsi -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:20.404 [2024-07-24 19:49:48.094707] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:11:20.404 [2024-07-24 19:49:48.094820] [ DPDK EAL parameters: iscsi -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:11:20.404 [2024-07-24 19:49:48.224120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.404 [2024-07-24 19:49:48.243334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.404 [2024-07-24 19:49:48.380056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.404 [2024-07-24 19:49:48.416292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:20.404 19:49:49 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:20.404 19:49:49 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@864 -- # return 0 00:11:20.404 19:49:49 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@38 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_set_options -w 0 -o 30 -a 16 00:11:20.662 19:49:49 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock framework_start_init 00:11:21.228 [2024-07-24 19:49:49.677886] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:21.486 iscsi_tgt_1 is listening. 00:11:21.486 19:49:49 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@40 -- # echo 'iscsi_tgt_1 is listening.' 00:11:21.486 19:49:49 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@42 -- # waitforlisten 71323 /var/tmp/spdk1.sock 00:11:21.486 19:49:49 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@831 -- # '[' -z 71323 ']' 00:11:21.486 19:49:49 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk1.sock 00:11:21.486 19:49:49 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:21.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk1.sock... 00:11:21.486 19:49:49 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk1.sock...' 
00:11:21.486 19:49:49 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:21.486 19:49:49 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:11:21.744 19:49:50 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:21.744 19:49:50 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@864 -- # return 0 00:11:21.744 19:49:50 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_set_options -w 0 -o 30 -a 16 00:11:21.744 19:49:50 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock framework_start_init 00:11:22.002 [2024-07-24 19:49:50.651919] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:22.568 iscsi_tgt_2 is listening. 00:11:22.568 19:49:50 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@45 -- # echo 'iscsi_tgt_2 is listening.' 
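The `waitforlisten` calls above poll (with `max_retries=100`) until each iscsi_tgt process has created its RPC socket and accepts connections. A self-contained sketch of that polling pattern for a UNIX domain socket (the function name, paths, and timings are illustrative; the real helper lives in autotest_common.sh and also checks the PID):

```python
import os
import socket
import tempfile
import threading
import time

def wait_for_listen(sock_path, max_retries=100, delay=0.1):
    """Poll until something accepts connections on sock_path, like waitforlisten."""
    for _ in range(max_retries):
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.connect(sock_path)
                return True
        except (FileNotFoundError, ConnectionRefusedError):
            time.sleep(delay)  # socket not there yet; retry
    return False

# Demo: bring the listener up shortly after polling starts, as a target would.
path = os.path.join(tempfile.mkdtemp(), "spdk0.sock")
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)

def listen_later():
    time.sleep(0.3)  # simulate target start-up time
    server.bind(path)
    server.listen(1)

threading.Thread(target=listen_later, daemon=True).start()
ok = wait_for_listen(path)
print(ok)
```

The retry budget here (100 × 0.1 s) mirrors the log's `max_retries=100`; a target that never listens makes the helper return failure instead of hanging the test.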
00:11:22.568 19:49:50 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@47 -- # timing_exit start_iscsi_tgts 00:11:22.568 19:49:50 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:22.568 19:49:50 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:11:22.568 19:49:50 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:11:22.568 19:49:51 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_create_portal_group 1 10.0.0.1:3260 00:11:22.824 19:49:51 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock bdev_null_create Null0 64 512 00:11:23.082 Null0 00:11:23.082 19:49:51 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_create_target_node Target1 Target1_alias Null0:0 1:2 64 -d 00:11:23.341 19:49:51 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:11:23.615 19:49:52 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_create_portal_group 1 10.0.0.3:3260 -p 00:11:23.873 19:49:52 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock bdev_null_create Null0 64 512 00:11:24.133 Null0 00:11:24.133 19:49:52 iscsi_tgt.iscsi_tgt_login_redirection 
-- login_redirection/login_redirection.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_create_target_node Target1 Target1_alias Null0:0 1:2 64 -d 00:11:24.391 19:49:52 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@67 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:11:24.391 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:11:24.391 19:49:52 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@68 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:11:24.391 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:11:24.391 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:11:24.391 19:49:52 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@69 -- # waitforiscsidevices 1 00:11:24.391 19:49:52 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@116 -- # local num=1 00:11:24.391 19:49:52 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:11:24.391 19:49:52 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:11:24.392 [2024-07-24 19:49:52.868134] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:24.392 19:49:52 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:11:24.392 19:49:52 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:11:24.392 19:49:52 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@119 -- # n=1 00:11:24.392 19:49:52 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:11:24.392 19:49:52 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@123 -- # return 0 00:11:24.392 19:49:52 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@72 -- # 
fiopid=71425 00:11:24.392 19:49:52 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t randrw -r 15 00:11:24.392 19:49:52 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@73 -- # echo 'FIO pid: 71425' 00:11:24.392 FIO pid: 71425 00:11:24.392 19:49:52 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@75 -- # trap 'iscsicleanup; killprocess $pid1; killprocess $pid2; killprocess $fiopid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:11:24.392 19:49:52 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_get_connections 00:11:24.392 19:49:52 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@77 -- # jq length 00:11:24.392 [global] 00:11:24.392 thread=1 00:11:24.392 invalidate=1 00:11:24.392 rw=randrw 00:11:24.392 time_based=1 00:11:24.392 runtime=15 00:11:24.392 ioengine=libaio 00:11:24.392 direct=1 00:11:24.392 bs=512 00:11:24.392 iodepth=1 00:11:24.392 norandommap=1 00:11:24.392 numjobs=1 00:11:24.392 00:11:24.392 [job0] 00:11:24.392 filename=/dev/sda 00:11:24.392 queue_depth set to 113 (sda) 00:11:24.650 job0: (g=0): rw=randrw, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:11:24.650 fio-3.35 00:11:24.650 Starting 1 thread 00:11:24.650 [2024-07-24 19:49:53.061055] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:24.650 19:49:53 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@77 -- # '[' 1 = 1 ']' 00:11:24.650 19:49:53 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@78 -- # jq length 00:11:24.650 19:49:53 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk1.sock iscsi_get_connections 00:11:24.912 19:49:53 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@78 -- # '[' 0 = 0 ']' 00:11:24.912 19:49:53 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_target_node_set_redirect iqn.2016-06.io.spdk:Target1 1 -a 10.0.0.3 -p 3260 00:11:25.172 19:49:53 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_target_node_request_logout iqn.2016-06.io.spdk:Target1 -t 1 00:11:25.431 19:49:53 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@85 -- # sleep 5 00:11:30.756 19:49:58 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_get_connections 00:11:30.756 19:49:58 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@87 -- # jq length 00:11:30.756 19:49:59 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@87 -- # '[' 0 = 0 ']' 00:11:30.756 19:49:59 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_get_connections 00:11:30.756 19:49:59 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@88 -- # jq length 00:11:30.756 19:49:59 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@88 -- # '[' 1 = 1 ']' 00:11:30.756 19:49:59 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_target_node_set_redirect iqn.2016-06.io.spdk:Target1 1 00:11:31.015 19:49:59 iscsi_tgt.iscsi_tgt_login_redirection -- 
login_redirection/login_redirection.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_target_node_request_logout iqn.2016-06.io.spdk:Target1 -t 1 00:11:31.273 19:49:59 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@93 -- # sleep 5 00:11:36.608 19:50:04 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_get_connections 00:11:36.608 19:50:04 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@95 -- # jq length 00:11:36.608 19:50:05 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@95 -- # '[' 1 = 1 ']' 00:11:36.608 19:50:05 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@96 -- # jq length 00:11:36.608 19:50:05 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_get_connections 00:11:36.868 19:50:05 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@96 -- # '[' 0 = 0 ']' 00:11:36.868 19:50:05 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@98 -- # wait 71425 00:11:39.512 [2024-07-24 19:50:08.172593] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:39.773 00:11:39.773 job0: (groupid=0, jobs=1): err= 0: pid=71458: Wed Jul 24 19:50:08 2024 00:11:39.773 read: IOPS=5478, BW=2739KiB/s (2805kB/s)(40.1MiB/15001msec) 00:11:39.773 slat (usec): min=3, max=246, avg= 7.63, stdev= 2.52 00:11:39.773 clat (usec): min=5, max=2686, avg=56.86, stdev=13.26 00:11:39.773 lat (usec): min=48, max=2700, avg=64.48, stdev=13.78 00:11:39.773 clat percentiles (usec): 00:11:39.773 | 1.00th=[ 48], 5.00th=[ 50], 10.00th=[ 50], 20.00th=[ 52], 00:11:39.773 | 30.00th=[ 53], 40.00th=[ 55], 50.00th=[ 56], 60.00th=[ 57], 00:11:39.773 | 
70.00th=[ 59], 80.00th=[ 61], 90.00th=[ 65], 95.00th=[ 71], 00:11:39.773 | 99.00th=[ 82], 99.50th=[ 88], 99.90th=[ 122], 99.95th=[ 190], 00:11:39.773 | 99.99th=[ 330] 00:11:39.773 bw ( KiB/s): min= 1385, max= 4180, per=100.00%, avg=3417.78, stdev=811.88, samples=23 00:11:39.773 iops : min= 2770, max= 8360, avg=6835.57, stdev=1623.76, samples=23 00:11:39.773 write: IOPS=5456, BW=2728KiB/s (2794kB/s)(40.0MiB/15001msec); 0 zone resets 00:11:39.773 slat (usec): min=4, max=260, avg= 7.55, stdev= 2.91 00:11:39.773 clat (usec): min=2, max=2008.2k, avg=109.72, stdev=9921.74 00:11:39.773 lat (usec): min=51, max=2008.2k, avg=117.27, stdev=9921.75 00:11:39.773 clat percentiles (usec): 00:11:39.773 | 1.00th=[ 51], 5.00th=[ 53], 10.00th=[ 54], 20.00th=[ 56], 00:11:39.773 | 30.00th=[ 57], 40.00th=[ 58], 50.00th=[ 60], 60.00th=[ 61], 00:11:39.773 | 70.00th=[ 62], 80.00th=[ 65], 90.00th=[ 70], 95.00th=[ 75], 00:11:39.773 | 99.00th=[ 86], 99.50th=[ 93], 99.90th=[ 133], 99.95th=[ 190], 00:11:39.773 | 99.99th=[ 408] 00:11:39.773 bw ( KiB/s): min= 1364, max= 4146, per=100.00%, avg=3402.00, stdev=816.92, samples=23 00:11:39.773 iops : min= 2728, max= 8292, avg=6804.00, stdev=1633.83, samples=23 00:11:39.773 lat (usec) : 4=0.01%, 10=0.01%, 20=0.01%, 50=5.04%, 100=94.72% 00:11:39.773 lat (usec) : 250=0.21%, 500=0.02%, 750=0.01%, 1000=0.01% 00:11:39.773 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:11:39.773 cpu : usr=3.39%, sys=11.39%, ctx=164119, majf=0, minf=1 00:11:39.773 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:39.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:39.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:39.774 issued rwts: total=82181,81859,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:39.774 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:39.774 00:11:39.774 Run status group 0 (all jobs): 00:11:39.774 READ: bw=2739KiB/s (2805kB/s), 2739KiB/s-2739KiB/s 
(2805kB/s-2805kB/s), io=40.1MiB (42.1MB), run=15001-15001msec 00:11:39.774 WRITE: bw=2728KiB/s (2794kB/s), 2728KiB/s-2728KiB/s (2794kB/s-2794kB/s), io=40.0MiB (41.9MB), run=15001-15001msec 00:11:39.774 00:11:39.774 Disk stats (read/write): 00:11:39.774 sda: ios=81388/81031, merge=0/0, ticks=4583/8875, in_queue=13458, util=99.48% 00:11:39.774 19:50:08 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@100 -- # trap - SIGINT SIGTERM EXIT 00:11:39.774 19:50:08 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@102 -- # iscsicleanup 00:11:39.774 19:50:08 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection' 00:11:39.774 Cleaning up iSCSI connection 00:11:39.774 19:50:08 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout 00:11:39.774 Logging out of session [sid: 30, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:11:39.774 Logout of [sid: 30, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
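The fio summary above is internally consistent: 82181 reads and 81859 writes of 512 B over 15001 ms reproduce the reported bandwidth figures exactly (fio prints KiB/s with kB/s in parentheses). A quick check of that arithmetic:

```python
def fio_bw(ios, block_size=512, runtime_ms=15001):
    """Return (KiB/s, kB/s) for a fio job from issued I/O count and runtime,
    matching how fio's summary line reports bandwidth."""
    bytes_per_sec = ios * block_size / (runtime_ms / 1000)
    return round(bytes_per_sec / 1024), round(bytes_per_sec / 1000)

print(fio_bw(82181))  # read side  -> (2739, 2805), matching bw=2739KiB/s (2805kB/s)
print(fio_bw(81859))  # write side -> (2728, 2794), matching bw=2728KiB/s (2794kB/s)
```

The per-direction IOPS (~5478 read) is just the same count divided by runtime; the one 2-second write completion latency outlier (`clat ... max=2008.2k` usec) is consistent with the two login redirections the connection survived mid-run.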
00:11:39.774 19:50:08 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete 00:11:39.774 19:50:08 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@985 -- # rm -rf 00:11:39.774 19:50:08 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@103 -- # killprocess 71322 00:11:39.774 19:50:08 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@950 -- # '[' -z 71322 ']' 00:11:39.774 19:50:08 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@954 -- # kill -0 71322 00:11:39.774 19:50:08 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@955 -- # uname 00:11:39.774 19:50:08 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:39.774 19:50:08 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71322 00:11:39.774 19:50:08 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:39.774 19:50:08 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:39.774 19:50:08 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71322' 00:11:39.774 killing process with pid 71322 00:11:39.774 19:50:08 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@969 -- # kill 71322 00:11:39.774 19:50:08 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@974 -- # wait 71322 00:11:40.342 19:50:08 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@104 -- # killprocess 71323 00:11:40.342 19:50:08 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@950 -- # '[' -z 71323 ']' 00:11:40.342 19:50:08 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@954 -- # kill -0 71323 00:11:40.342 19:50:08 iscsi_tgt.iscsi_tgt_login_redirection -- 
common/autotest_common.sh@955 -- # uname 00:11:40.342 19:50:08 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:40.342 19:50:08 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71323 00:11:40.342 19:50:08 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:40.342 19:50:08 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:40.342 killing process with pid 71323 00:11:40.342 19:50:08 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71323' 00:11:40.342 19:50:08 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@969 -- # kill 71323 00:11:40.342 19:50:08 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@974 -- # wait 71323 00:11:40.909 19:50:09 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@105 -- # iscsitestfini 00:11:40.909 19:50:09 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:11:40.909 00:11:40.909 real 0m21.589s 00:11:40.909 user 0m42.423s 00:11:40.909 sys 0m7.036s 00:11:40.909 19:50:09 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:40.909 19:50:09 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:11:40.909 ************************************ 00:11:40.909 END TEST iscsi_tgt_login_redirection 00:11:40.909 ************************************ 00:11:40.909 19:50:09 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@42 -- # run_test iscsi_tgt_digests /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/digests/digests.sh 00:11:40.909 19:50:09 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:40.909 19:50:09 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:40.909 19:50:09 iscsi_tgt -- 
common/autotest_common.sh@10 -- # set +x 00:11:40.909 ************************************ 00:11:40.909 START TEST iscsi_tgt_digests 00:11:40.909 ************************************ 00:11:40.909 19:50:09 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/digests/digests.sh 00:11:41.168 * Looking for test storage... 00:11:41.168 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/digests 00:11:41.168 19:50:09 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:11:41.168 19:50:09 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:11:41.168 19:50:09 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:11:41.168 19:50:09 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:11:41.168 19:50:09 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:11:41.168 19:50:09 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:11:41.168 19:50:09 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:11:41.168 19:50:09 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:11:41.168 19:50:09 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:11:41.168 19:50:09 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:11:41.168 19:50:09 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:11:41.168 19:50:09 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:11:41.168 19:50:09 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:11:41.168 19:50:09 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@23 -- # 
ISCSI_PORT=3260 00:11:41.168 19:50:09 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:11:41.168 19:50:09 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:11:41.168 19:50:09 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:11:41.168 19:50:09 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:11:41.168 19:50:09 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:11:41.168 19:50:09 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:11:41.168 19:50:09 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@11 -- # iscsitestinit 00:11:41.168 19:50:09 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:11:41.168 19:50:09 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@49 -- # MALLOC_BDEV_SIZE=64 00:11:41.168 19:50:09 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:11:41.168 19:50:09 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@52 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:11:41.168 19:50:09 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@54 -- # timing_enter start_iscsi_tgt 00:11:41.168 19:50:09 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:41.168 19:50:09 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:11:41.168 19:50:09 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@57 -- # pid=71716 00:11:41.168 19:50:09 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@58 -- # echo 'Process pid: 71716' 00:11:41.168 Process pid: 71716 00:11:41.168 19:50:09 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@60 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:11:41.168 19:50:09 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@56 -- # ip netns exec spdk_iscsi_ns 
/home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --wait-for-rpc 00:11:41.168 19:50:09 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@62 -- # waitforlisten 71716 00:11:41.168 19:50:09 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@831 -- # '[' -z 71716 ']' 00:11:41.168 19:50:09 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.168 19:50:09 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:41.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.168 19:50:09 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.168 19:50:09 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:41.168 19:50:09 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:11:41.168 [2024-07-24 19:50:09.744234] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:11:41.168 [2024-07-24 19:50:09.744339] [ DPDK EAL parameters: iscsi --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71716 ] 00:11:41.427 [2024-07-24 19:50:09.889287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:41.427 [2024-07-24 19:50:10.052667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.427 [2024-07-24 19:50:10.052718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:41.427 [2024-07-24 19:50:10.052928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:41.427 [2024-07-24 19:50:10.052941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.362 19:50:10 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:42.362 19:50:10 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@864 -- # return 0 00:11:42.362 19:50:10 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@63 -- # rpc_cmd iscsi_set_options -o 30 -a 16 00:11:42.362 19:50:10 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.362 19:50:10 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:11:42.362 19:50:10 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.362 19:50:10 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@64 -- # rpc_cmd framework_start_init 00:11:42.362 19:50:10 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.362 19:50:10 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:11:42.362 [2024-07-24 19:50:10.859697] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:42.621 iscsi_tgt is listening. Running tests... 
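The `waitforlisten 71716` step recorded above polls (up to `max_retries=100`) until the target process is up and its RPC socket `/var/tmp/spdk.sock` exists. A minimal standalone sketch of that retry pattern, using an ordinary file in place of the socket (the real helper also checks the pid and probes the socket via rpc.py):

```shell
# Poll until a path exists, up to a retry budget (the log uses max_retries=100).
wait_for_path() {
  local path=$1 retries=${2:-100} i
  for ((i = 0; i < retries; i++)); do
    [[ -e $path ]] && return 0
    sleep 0.1
  done
  return 1
}

# Demo: the "socket" appears shortly after we start waiting.
(sleep 0.3; touch /tmp/demo_rpc.sock) &
wait_for_path /tmp/demo_rpc.sock && echo "listening"
wait
rm -f /tmp/demo_rpc.sock
```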
00:11:42.621 19:50:11 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.621 19:50:11 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@65 -- # echo 'iscsi_tgt is listening. Running tests...' 00:11:42.621 19:50:11 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@67 -- # timing_exit start_iscsi_tgt 00:11:42.621 19:50:11 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:42.621 19:50:11 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:11:42.621 19:50:11 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@69 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:11:42.621 19:50:11 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.621 19:50:11 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:11:42.621 19:50:11 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.621 19:50:11 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@70 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:11:42.621 19:50:11 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.621 19:50:11 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:11:42.621 19:50:11 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.621 19:50:11 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@71 -- # rpc_cmd bdev_malloc_create 64 512 00:11:42.621 19:50:11 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.621 19:50:11 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:11:42.621 Malloc0 00:11:42.621 19:50:11 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.621 19:50:11 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@76 -- # rpc_cmd iscsi_create_target_node Target3 Target3_alias Malloc0:0 1:2 64 -d 00:11:42.621 
19:50:11 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.621 19:50:11 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:11:42.621 19:50:11 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.621 19:50:11 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@77 -- # sleep 1 00:11:43.997 19:50:12 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@79 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:11:43.997 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:11:43.997 19:50:12 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@83 -- # iscsiadm -m node -p 10.0.0.1:3260 -o update -n 'node.conn[0].iscsi.DataDigest' -v None 00:11:43.997 19:50:12 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@83 -- # true 00:11:43.997 19:50:12 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@83 -- # DataDigestAbility='iscsiadm: Cannot modify node.conn[0].iscsi.DataDigest. Invalid param name. 00:11:43.997 iscsiadm: Could not execute operation on all records: invalid parameter' 00:11:43.997 19:50:12 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@84 -- # '[' 'iscsiadm: Cannot modify node.conn[0].iscsi.DataDigest. Invalid param name. 
00:11:43.997 iscsiadm: Could not execute operation on all records: invalid parameterx' '!=' x ']' 00:11:43.997 19:50:12 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@85 -- # run_test iscsi_tgt_digest iscsi_header_digest_test 00:11:43.997 19:50:12 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:43.997 19:50:12 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:43.997 19:50:12 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:11:43.997 ************************************ 00:11:43.997 START TEST iscsi_tgt_digest 00:11:43.997 ************************************ 00:11:43.997 19:50:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- common/autotest_common.sh@1125 -- # iscsi_header_digest_test 00:11:43.997 19:50:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@27 -- # node_login_fio_logout 'HeaderDigest -v CRC32C' 00:11:43.997 19:50:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@14 -- # for arg in "$@" 00:11:43.997 19:50:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@15 -- # iscsiadm -m node -p 10.0.0.1:3260 -o update -n 'node.conn[0].iscsi.HeaderDigest' -v CRC32C 00:11:43.997 19:50:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@17 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:11:43.997 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:11:43.997 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
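The target configuration replayed above boils down to six RPCs issued over the UNIX socket. A dry-run sketch, assuming SPDK's `scripts/rpc.py` at a checkout-relative path; it only echoes the commands, since no live target is assumed here:

```shell
RPC="scripts/rpc.py"   # path relative to an SPDK checkout; adjust as needed
cmds=(
  "iscsi_set_options -o 30 -a 16"
  "framework_start_init"
  "iscsi_create_portal_group 1 10.0.0.1:3260"
  "iscsi_create_initiator_group 2 ANY 10.0.0.2/32"
  "bdev_malloc_create 64 512"
  "iscsi_create_target_node Target3 Target3_alias Malloc0:0 1:2 64 -d"
)
for c in "${cmds[@]}"; do
  echo "$RPC $c"   # drop the echo to run against a live target
done
```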
00:11:43.997 19:50:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@18 -- # waitforiscsidevices 1 00:11:43.997 19:50:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@116 -- # local num=1 00:11:43.997 19:50:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:11:43.997 19:50:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:11:43.998 19:50:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:11:43.998 19:50:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:11:43.998 [2024-07-24 19:50:12.329886] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:43.998 19:50:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # n=1 00:11:43.998 19:50:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:11:43.998 19:50:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@123 -- # return 0 00:11:43.998 19:50:12 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t write -r 2 00:11:43.998 [global] 00:11:43.998 thread=1 00:11:43.998 invalidate=1 00:11:43.998 rw=write 00:11:43.998 time_based=1 00:11:43.998 runtime=2 00:11:43.998 ioengine=libaio 00:11:43.998 direct=1 00:11:43.998 bs=512 00:11:43.998 iodepth=1 00:11:43.998 norandommap=1 00:11:43.998 numjobs=1 00:11:43.998 00:11:43.998 [job0] 00:11:43.998 filename=/dev/sda 00:11:43.998 queue_depth set to 113 (sda) 00:11:43.998 job0: (g=0): rw=write, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:11:43.998 fio-3.35 00:11:43.998 Starting 1 thread 00:11:43.998 [2024-07-24 19:50:12.520186] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:11:46.526 [2024-07-24 19:50:14.630239] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:46.526 00:11:46.526 job0: (groupid=0, jobs=1): err= 0: pid=71820: Wed Jul 24 19:50:14 2024 00:11:46.526 write: IOPS=10.5k, BW=5227KiB/s (5353kB/s)(10.2MiB/2001msec); 0 zone resets 00:11:46.526 slat (nsec): min=4675, max=68636, avg=7107.44, stdev=1656.72 00:11:46.526 clat (usec): min=63, max=2876, avg=87.98, stdev=35.00 00:11:46.526 lat (usec): min=78, max=2883, avg=95.09, stdev=35.11 00:11:46.526 clat percentiles (usec): 00:11:46.526 | 1.00th=[ 76], 5.00th=[ 78], 10.00th=[ 79], 20.00th=[ 81], 00:11:46.526 | 30.00th=[ 84], 40.00th=[ 86], 50.00th=[ 87], 60.00th=[ 89], 00:11:46.526 | 70.00th=[ 90], 80.00th=[ 92], 90.00th=[ 96], 95.00th=[ 101], 00:11:46.526 | 99.00th=[ 112], 99.50th=[ 117], 99.90th=[ 265], 99.95th=[ 523], 00:11:46.526 | 99.99th=[ 2376] 00:11:46.526 bw ( KiB/s): min= 5205, max= 5354, per=100.00%, avg=5266.00, stdev=78.08, samples=3 00:11:46.526 iops : min=10410, max=10708, avg=10532.00, stdev=156.17, samples=3 00:11:46.526 lat (usec) : 100=94.33%, 250=5.57%, 500=0.04%, 750=0.03%, 1000=0.02% 00:11:46.526 lat (msec) : 4=0.01% 00:11:46.526 cpu : usr=2.60%, sys=10.75%, ctx=20923, majf=0, minf=1 00:11:46.526 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:46.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.526 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.526 issued rwts: total=0,20920,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.526 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:46.526 00:11:46.526 Run status group 0 (all jobs): 00:11:46.526 WRITE: bw=5227KiB/s (5353kB/s), 5227KiB/s-5227KiB/s (5353kB/s-5353kB/s), io=10.2MiB (10.7MB), run=2001-2001msec 00:11:46.526 00:11:46.526 Disk stats (read/write): 00:11:46.526 sda: ios=48/19729, merge=0/0, ticks=8/1715, in_queue=1724, util=95.27% 
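The write-phase summary above is internally consistent: 20920 completed 512-byte IOs over the 2.001 s runtime reproduce the reported ~10.5k IOPS and 5227 KiB/s. Checked with shell integer arithmetic:

```shell
ios=20920; bs=512; runtime_ms=2001      # from the fio "issued rwts" and run lines
iops=$(( ios * 1000 / runtime_ms ))     # completed IOs per second (truncated)
bw_kib=$(( iops * bs / 1024 ))          # KiB/s at 512-byte blocks
echo "iops=$iops bw=${bw_kib}KiB/s"     # iops=10454 bw=5227KiB/s
```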
00:11:46.526 19:50:14 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t read -r 2 00:11:46.526 [global] 00:11:46.526 thread=1 00:11:46.526 invalidate=1 00:11:46.526 rw=read 00:11:46.526 time_based=1 00:11:46.526 runtime=2 00:11:46.526 ioengine=libaio 00:11:46.526 direct=1 00:11:46.526 bs=512 00:11:46.526 iodepth=1 00:11:46.526 norandommap=1 00:11:46.526 numjobs=1 00:11:46.526 00:11:46.526 [job0] 00:11:46.526 filename=/dev/sda 00:11:46.526 queue_depth set to 113 (sda) 00:11:46.526 job0: (g=0): rw=read, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:11:46.526 fio-3.35 00:11:46.526 Starting 1 thread 00:11:48.429 00:11:48.429 job0: (groupid=0, jobs=1): err= 0: pid=71873: Wed Jul 24 19:50:16 2024 00:11:48.429 read: IOPS=13.1k, BW=6530KiB/s (6687kB/s)(12.8MiB/2000msec) 00:11:48.429 slat (usec): min=3, max=102, avg= 5.32, stdev= 2.39 00:11:48.429 clat (usec): min=47, max=750, avg=70.56, stdev=13.60 00:11:48.429 lat (usec): min=58, max=754, avg=75.88, stdev=14.99 00:11:48.429 clat percentiles (usec): 00:11:48.429 | 1.00th=[ 58], 5.00th=[ 59], 10.00th=[ 60], 20.00th=[ 62], 00:11:48.429 | 30.00th=[ 63], 40.00th=[ 65], 50.00th=[ 69], 60.00th=[ 71], 00:11:48.429 | 70.00th=[ 73], 80.00th=[ 78], 90.00th=[ 85], 95.00th=[ 97], 00:11:48.429 | 99.00th=[ 114], 99.50th=[ 122], 99.90th=[ 157], 99.95th=[ 186], 00:11:48.429 | 99.99th=[ 322] 00:11:48.429 bw ( KiB/s): min= 5150, max= 7169, per=98.22%, avg=6414.33, stdev=1101.78, samples=3 00:11:48.429 iops : min=10301, max=14338, avg=12829.00, stdev=2202.98, samples=3 00:11:48.429 lat (usec) : 50=0.02%, 100=95.90%, 250=4.06%, 500=0.02%, 1000=0.01% 00:11:48.429 cpu : usr=4.75%, sys=10.40%, ctx=26199, majf=0, minf=1 00:11:48.429 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:48.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:48.429 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:48.429 issued rwts: total=26121,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:48.429 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:48.429 00:11:48.429 Run status group 0 (all jobs): 00:11:48.429 READ: bw=6530KiB/s (6687kB/s), 6530KiB/s-6530KiB/s (6687kB/s-6687kB/s), io=12.8MiB (13.4MB), run=2000-2000msec 00:11:48.429 00:11:48.429 Disk stats (read/write): 00:11:48.429 sda: ios=24609/0, merge=0/0, ticks=1656/0, in_queue=1655, util=95.07% 00:11:48.429 19:50:16 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@21 -- # iscsiadm -m node --logout -p 10.0.0.1:3260 00:11:48.429 Logging out of session [sid: 31, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:11:48.429 Logout of [sid: 31, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:11:48.429 19:50:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@22 -- # waitforiscsidevices 0 00:11:48.429 19:50:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@116 -- # local num=0 00:11:48.429 19:50:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:11:48.429 19:50:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:11:48.429 19:50:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:11:48.429 19:50:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:11:48.429 iscsiadm: No active sessions. 
00:11:48.430 19:50:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # true 00:11:48.430 19:50:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # n=0 00:11:48.430 19:50:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:11:48.430 19:50:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@123 -- # return 0 00:11:48.430 19:50:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@31 -- # node_login_fio_logout 'HeaderDigest -v CRC32C,None' 00:11:48.430 19:50:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@14 -- # for arg in "$@" 00:11:48.430 19:50:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@15 -- # iscsiadm -m node -p 10.0.0.1:3260 -o update -n 'node.conn[0].iscsi.HeaderDigest' -v CRC32C,None 00:11:48.430 19:50:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@17 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:11:48.687 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:11:48.688 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
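Both digest rounds negotiate CRC32C, the Castagnoli CRC that iSCSI uses for header and data digests. A pure-bash sketch of the byte-at-a-time reflected computation (polynomial 0x82F63B78, init and final xor 0xFFFFFFFF); it handles ASCII input only, whereas the real digest covers raw PDU bytes. The standard check value for the string "123456789" is e3069283:

```shell
crc32c() {
  local data=$1 crc=$(( 0xFFFFFFFF )) i j byte
  for ((i = 0; i < ${#data}; i++)); do
    printf -v byte '%d' "'${data:i:1}"   # character -> byte value
    crc=$(( crc ^ byte ))
    for ((j = 0; j < 8; j++)); do
      if (( crc & 1 )); then
        crc=$(( (crc >> 1) ^ 0x82F63B78 ))   # reflected CRC-32C polynomial
      else
        crc=$(( crc >> 1 ))
      fi
    done
  done
  printf '%08x\n' $(( crc ^ 0xFFFFFFFF ))
}

crc32c "123456789"   # e3069283 (the CRC-32C check value)
```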
00:11:48.688 19:50:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@18 -- # waitforiscsidevices 1 00:11:48.688 19:50:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@116 -- # local num=1 00:11:48.688 19:50:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:11:48.688 19:50:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:11:48.688 19:50:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:11:48.688 19:50:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:11:48.688 [2024-07-24 19:50:17.107308] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:48.688 19:50:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # n=1 00:11:48.688 19:50:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:11:48.688 19:50:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@123 -- # return 0 00:11:48.688 19:50:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t write -r 2 00:11:48.688 [global] 00:11:48.688 thread=1 00:11:48.688 invalidate=1 00:11:48.688 rw=write 00:11:48.688 time_based=1 00:11:48.688 runtime=2 00:11:48.688 ioengine=libaio 00:11:48.688 direct=1 00:11:48.688 bs=512 00:11:48.688 iodepth=1 00:11:48.688 norandommap=1 00:11:48.688 numjobs=1 00:11:48.688 00:11:48.688 [job0] 00:11:48.688 filename=/dev/sda 00:11:48.688 queue_depth set to 113 (sda) 00:11:48.688 job0: (g=0): rw=write, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:11:48.688 fio-3.35 00:11:48.688 Starting 1 thread 00:11:48.688 [2024-07-24 19:50:17.297702] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:11:51.219 [2024-07-24 19:50:19.407745] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:51.219 00:11:51.219 job0: (groupid=0, jobs=1): err= 0: pid=71944: Wed Jul 24 19:50:19 2024 00:11:51.219 write: IOPS=11.6k, BW=5805KiB/s (5944kB/s)(11.3MiB/2001msec); 0 zone resets 00:11:51.219 slat (nsec): min=4518, max=53070, avg=7465.63, stdev=1688.77 00:11:51.219 clat (usec): min=59, max=2685, avg=78.15, stdev=20.86 00:11:51.219 lat (usec): min=67, max=2692, avg=85.61, stdev=21.02 00:11:51.219 clat percentiles (usec): 00:11:51.219 | 1.00th=[ 66], 5.00th=[ 70], 10.00th=[ 72], 20.00th=[ 73], 00:11:51.219 | 30.00th=[ 74], 40.00th=[ 75], 50.00th=[ 76], 60.00th=[ 77], 00:11:51.219 | 70.00th=[ 79], 80.00th=[ 83], 90.00th=[ 89], 95.00th=[ 95], 00:11:51.219 | 99.00th=[ 111], 99.50th=[ 143], 99.90th=[ 186], 99.95th=[ 221], 00:11:51.219 | 99.99th=[ 619] 00:11:51.219 bw ( KiB/s): min= 5881, max= 6029, per=100.00%, avg=5953.00, stdev=74.08, samples=3 00:11:51.219 iops : min=11762, max=12058, avg=11906.00, stdev=148.16, samples=3 00:11:51.219 lat (usec) : 100=97.52%, 250=2.45%, 500=0.03%, 750=0.01% 00:11:51.219 lat (msec) : 4=0.01% 00:11:51.219 cpu : usr=1.85%, sys=12.50%, ctx=23230, majf=0, minf=1 00:11:51.220 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:51.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.220 issued rwts: total=0,23230,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:51.220 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:51.220 00:11:51.220 Run status group 0 (all jobs): 00:11:51.220 WRITE: bw=5805KiB/s (5944kB/s), 5805KiB/s-5805KiB/s (5944kB/s-5944kB/s), io=11.3MiB (11.9MB), run=2001-2001msec 00:11:51.220 00:11:51.220 Disk stats (read/write): 00:11:51.220 sda: ios=48/21942, merge=0/0, ticks=8/1724, in_queue=1732, util=95.42% 00:11:51.220 19:50:19 
iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t read -r 2 00:11:51.220 [global] 00:11:51.220 thread=1 00:11:51.220 invalidate=1 00:11:51.220 rw=read 00:11:51.220 time_based=1 00:11:51.220 runtime=2 00:11:51.220 ioengine=libaio 00:11:51.220 direct=1 00:11:51.220 bs=512 00:11:51.220 iodepth=1 00:11:51.220 norandommap=1 00:11:51.220 numjobs=1 00:11:51.220 00:11:51.220 [job0] 00:11:51.220 filename=/dev/sda 00:11:51.220 queue_depth set to 113 (sda) 00:11:51.220 job0: (g=0): rw=read, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:11:51.220 fio-3.35 00:11:51.220 Starting 1 thread 00:11:53.121 00:11:53.121 job0: (groupid=0, jobs=1): err= 0: pid=71997: Wed Jul 24 19:50:21 2024 00:11:53.121 read: IOPS=13.7k, BW=6862KiB/s (7027kB/s)(13.4MiB/2001msec) 00:11:53.121 slat (nsec): min=4162, max=82733, avg=5712.93, stdev=2158.01 00:11:53.121 clat (nsec): min=1584, max=2524.8k, avg=66485.55, stdev=21437.68 00:11:53.121 lat (usec): min=55, max=2534, avg=72.20, stdev=21.63 00:11:53.121 clat percentiles (usec): 00:11:53.121 | 1.00th=[ 55], 5.00th=[ 57], 10.00th=[ 58], 20.00th=[ 59], 00:11:53.121 | 30.00th=[ 63], 40.00th=[ 65], 50.00th=[ 67], 60.00th=[ 68], 00:11:53.121 | 70.00th=[ 70], 80.00th=[ 71], 90.00th=[ 75], 95.00th=[ 80], 00:11:53.121 | 99.00th=[ 90], 99.50th=[ 98], 99.90th=[ 210], 99.95th=[ 310], 00:11:53.121 | 99.99th=[ 938] 00:11:53.121 bw ( KiB/s): min= 6618, max= 7267, per=100.00%, avg=6978.67, stdev=330.49, samples=3 00:11:53.121 iops : min=13236, max=14534, avg=13957.33, stdev=660.98, samples=3 00:11:53.121 lat (usec) : 2=0.01%, 20=0.01%, 50=0.52%, 100=99.08%, 250=0.32% 00:11:53.121 lat (usec) : 500=0.03%, 750=0.03%, 1000=0.01% 00:11:53.121 lat (msec) : 2=0.01%, 4=0.01% 00:11:53.121 cpu : usr=4.50%, sys=11.85%, ctx=27684, majf=0, minf=1 00:11:53.121 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:53.121 submit 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.121 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.121 issued rwts: total=27461,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:53.121 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:53.121 00:11:53.121 Run status group 0 (all jobs): 00:11:53.121 READ: bw=6862KiB/s (7027kB/s), 6862KiB/s-6862KiB/s (7027kB/s-7027kB/s), io=13.4MiB (14.1MB), run=2001-2001msec 00:11:53.121 00:11:53.121 Disk stats (read/write): 00:11:53.121 sda: ios=26057/0, merge=0/0, ticks=1644/0, in_queue=1644, util=95.07% 00:11:53.121 19:50:21 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@21 -- # iscsiadm -m node --logout -p 10.0.0.1:3260 00:11:53.380 Logging out of session [sid: 32, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:11:53.380 Logout of [sid: 32, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:11:53.380 19:50:21 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@22 -- # waitforiscsidevices 0 00:11:53.380 19:50:21 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@116 -- # local num=0 00:11:53.380 19:50:21 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:11:53.380 19:50:21 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:11:53.380 19:50:21 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:11:53.380 19:50:21 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:11:53.380 iscsiadm: No active sessions. 
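The `waitforiscsidevices 0` check above counts attached disks by grepping `iscsiadm -m session -P 3`; with no sessions, `iscsiadm` errors out and the `# true` line forces the count to 0. A sketch of just the counting step, fed canned text since no live session is assumed here:

```shell
count_devices() {
  # stdin: iscsiadm -m session -P 3 output (or its error message)
  grep -c 'Attached scsi disk sd[a-z]*' || true   # grep -c exits 1 on zero matches
}

printf 'Attached scsi disk sda  State: running\n' | count_devices   # prints 1
printf 'iscsiadm: No active sessions.\n' | count_devices            # prints 0
```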
00:11:53.380 19:50:21 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # true 00:11:53.380 19:50:21 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # n=0 00:11:53.380 19:50:21 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:11:53.380 19:50:21 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@123 -- # return 0 00:11:53.380 00:11:53.380 real 0m9.579s 00:11:53.380 user 0m0.811s 00:11:53.380 sys 0m1.285s 00:11:53.380 19:50:21 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:53.380 19:50:21 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- common/autotest_common.sh@10 -- # set +x 00:11:53.380 ************************************ 00:11:53.380 END TEST iscsi_tgt_digest 00:11:53.380 ************************************ 00:11:53.380 19:50:21 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:11:53.380 19:50:21 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@92 -- # iscsicleanup 00:11:53.380 Cleaning up iSCSI connection 00:11:53.380 19:50:21 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection' 00:11:53.380 19:50:21 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout 00:11:53.380 iscsiadm: No matching sessions found 00:11:53.380 19:50:21 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@983 -- # true 00:11:53.380 19:50:21 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete 00:11:53.380 19:50:21 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@985 -- # rm -rf 00:11:53.380 19:50:21 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@93 -- # killprocess 71716 00:11:53.380 19:50:21 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@950 -- # '[' -z 71716 ']' 00:11:53.380 19:50:21 iscsi_tgt.iscsi_tgt_digests -- 
common/autotest_common.sh@954 -- # kill -0 71716 00:11:53.380 19:50:21 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@955 -- # uname 00:11:53.380 19:50:21 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:53.380 19:50:21 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71716 00:11:53.380 19:50:21 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:53.380 19:50:21 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:53.380 killing process with pid 71716 00:11:53.380 19:50:21 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71716' 00:11:53.380 19:50:21 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@969 -- # kill 71716 00:11:53.380 19:50:21 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@974 -- # wait 71716 00:11:53.946 19:50:22 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@94 -- # iscsitestfini 00:11:53.946 19:50:22 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:11:53.946 00:11:53.946 real 0m13.019s 00:11:53.946 user 0m47.370s 00:11:53.946 sys 0m4.063s 00:11:53.946 19:50:22 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:53.946 ************************************ 00:11:53.946 END TEST iscsi_tgt_digests 00:11:53.946 ************************************ 00:11:53.946 19:50:22 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:11:54.204 19:50:22 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@43 -- # run_test iscsi_tgt_fuzz /home/vagrant/spdk_repo/spdk/test/fuzz/autofuzz_iscsi.sh --timeout=30 00:11:54.204 19:50:22 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:54.204 19:50:22 iscsi_tgt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:54.204 19:50:22 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 
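The `killprocess 71716` teardown above first verifies the pid is alive (`kill -0`), inspects the process name with `ps --no-headers -o comm=`, then signals and reaps it. A reduced sketch of that shape against a throwaway child process:

```shell
killprocess_sketch() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 1   # nothing to do if already gone
  kill "$pid"                              # SIGTERM, as in the helper
  wait "$pid" 2>/dev/null || true          # reap; ignore the signal exit status
  return 0
}

sleep 30 &
victim=$!
killprocess_sketch "$victim" && echo "stopped pid $victim"
```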
00:11:54.204 ************************************ 00:11:54.204 START TEST iscsi_tgt_fuzz 00:11:54.204 ************************************ 00:11:54.204 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/fuzz/autofuzz_iscsi.sh --timeout=30 00:11:54.204 * Looking for test storage... 00:11:54.204 * Found test storage at /home/vagrant/spdk_repo/spdk/test/fuzz 00:11:54.204 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:11:54.204 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:11:54.204 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:11:54.204 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:11:54.204 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:11:54.204 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:11:54.204 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:11:54.204 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:11:54.204 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:11:54.204 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:11:54.205 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:11:54.205 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:11:54.205 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:11:54.205 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:11:54.205 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@24 -- # 
NETMASK=10.0.0.2/32 00:11:54.205 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:11:54.205 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:11:54.205 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:11:54.205 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:11:54.205 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:11:54.205 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@11 -- # iscsitestinit 00:11:54.205 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:11:54.205 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@13 -- # '[' -z 10.0.0.1 ']' 00:11:54.205 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@18 -- # '[' -z 10.0.0.2 ']' 00:11:54.205 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@23 -- # timing_enter iscsi_fuzz_test 00:11:54.205 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:54.205 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:54.205 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@25 -- # MALLOC_BDEV_SIZE=64 00:11:54.205 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@26 -- # MALLOC_BLOCK_SIZE=4096 00:11:54.205 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@28 -- # TEST_TIMEOUT=1200 00:11:54.205 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@31 -- # for i in "$@" 00:11:54.205 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@32 -- # case "$i" in 00:11:54.205 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@34 -- # TEST_TIMEOUT=30 00:11:54.205 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@39 -- # timing_enter start_iscsi_tgt 00:11:54.205 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:11:54.205 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:54.205 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@42 -- # iscsipid=72103 00:11:54.205 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@41 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --disable-cpumask-locks --wait-for-rpc 00:11:54.205 Process iscsipid: 72103 00:11:54.205 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@43 -- # echo 'Process iscsipid: 72103' 00:11:54.205 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@45 -- # trap 'killprocess $iscsipid; exit 1' SIGINT SIGTERM EXIT 00:11:54.205 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@47 -- # waitforlisten 72103 00:11:54.205 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@831 -- # '[' -z 72103 ']' 00:11:54.205 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.205 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:54.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.205 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
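The fuzz harness above is invoked as `autofuzz_iscsi.sh --timeout=30`; the `@28`/`@34` entries show the default `TEST_TIMEOUT=1200` being overridden by the flag. The `for`/`case` parsing pattern, reduced to a standalone sketch:

```shell
TEST_TIMEOUT=1200   # default, as in the script

parse_timeout() {
  local i
  for i in "$@"; do
    case "$i" in
      --timeout=*) TEST_TIMEOUT="${i#--timeout=}" ;;   # strip the flag prefix
    esac
  done
}

parse_timeout --timeout=30
echo "TEST_TIMEOUT=$TEST_TIMEOUT"   # TEST_TIMEOUT=30
```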
00:11:54.205 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:54.205 19:50:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:55.580 19:50:23 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:55.580 19:50:23 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@864 -- # return 0 00:11:55.580 19:50:23 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@49 -- # rpc_cmd iscsi_set_options -o 60 -a 16 00:11:55.580 19:50:23 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.580 19:50:23 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:55.580 19:50:23 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.580 19:50:23 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@50 -- # rpc_cmd framework_start_init 00:11:55.580 19:50:23 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.580 19:50:23 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:55.580 19:50:24 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.580 iscsi_tgt is listening. Running tests... 00:11:55.580 19:50:24 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@51 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:11:55.580 19:50:24 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@52 -- # timing_exit start_iscsi_tgt 00:11:55.580 19:50:24 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:55.580 19:50:24 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:55.580 19:50:24 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@54 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:11:55.580 19:50:24 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.580 19:50:24 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:55.580 19:50:24 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.580 19:50:24 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@55 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:11:55.580 19:50:24 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.580 19:50:24 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:55.580 19:50:24 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.580 19:50:24 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@56 -- # rpc_cmd bdev_malloc_create 64 4096 00:11:55.580 19:50:24 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.580 19:50:24 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:55.839 Malloc0 00:11:55.839 19:50:24 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.839 19:50:24 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@57 -- # rpc_cmd iscsi_create_target_node disk1 disk1_alias Malloc0:0 1:2 256 -d 00:11:55.839 19:50:24 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.839 19:50:24 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:55.839 19:50:24 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:11:55.839 19:50:24 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@58 -- # sleep 1 00:11:56.803 19:50:25 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@60 -- # trap 'killprocess $iscsipid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:11:56.804 19:50:25 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/iscsi_fuzz/iscsi_fuzz -m 0xF0 -T 10.0.0.1 -t 30 00:12:28.919 Fuzzing completed. Shutting down the fuzz application. 00:12:28.919 00:12:28.919 device 0x120ac40 stats: Sent 11518 valid opcode PDUs, 104741 invalid opcode PDUs. 00:12:28.919 19:50:55 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@64 -- # rpc_cmd iscsi_delete_target_node iqn.2016-06.io.spdk:disk1 00:12:28.919 19:50:55 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.919 19:50:55 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:28.919 19:50:55 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.919 19:50:55 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@67 -- # rpc_cmd bdev_malloc_delete Malloc0 00:12:28.919 19:50:55 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.919 19:50:55 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:28.919 19:50:55 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.919 19:50:55 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:12:28.919 19:50:55 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@71 -- # killprocess 72103 00:12:28.919 19:50:55 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@950 -- # '[' -z 72103 ']' 00:12:28.919 19:50:55 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@954 -- # kill -0 72103 00:12:28.919 19:50:55 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@955 -- # uname 00:12:28.919 19:50:55 iscsi_tgt.iscsi_tgt_fuzz -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:28.919 19:50:55 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72103 00:12:28.919 19:50:55 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:28.919 19:50:55 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:28.919 killing process with pid 72103 00:12:28.919 19:50:55 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72103' 00:12:28.919 19:50:55 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@969 -- # kill 72103 00:12:28.919 19:50:55 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@974 -- # wait 72103 00:12:28.919 19:50:56 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@73 -- # iscsitestfini 00:12:28.919 19:50:56 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:12:28.919 19:50:56 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@75 -- # timing_exit iscsi_fuzz_test 00:12:28.919 19:50:56 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:28.919 19:50:56 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:28.919 ************************************ 00:12:28.919 END TEST iscsi_tgt_fuzz 00:12:28.919 ************************************ 00:12:28.919 00:12:28.919 real 0m33.925s 00:12:28.919 user 3m9.030s 00:12:28.919 sys 0m15.745s 00:12:28.919 19:50:56 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:28.919 19:50:56 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:28.919 19:50:56 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@44 -- # run_test iscsi_tgt_multiconnection /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/multiconnection/multiconnection.sh 00:12:28.919 19:50:56 iscsi_tgt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:28.919 19:50:56 iscsi_tgt -- common/autotest_common.sh@1107 
-- # xtrace_disable 00:12:28.919 19:50:56 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:12:28.919 ************************************ 00:12:28.919 START TEST iscsi_tgt_multiconnection 00:12:28.919 ************************************ 00:12:28.919 19:50:56 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/multiconnection/multiconnection.sh 00:12:28.919 * Looking for test storage... 00:12:28.919 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/multiconnection 00:12:28.919 19:50:56 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:12:28.919 19:50:56 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:12:28.919 19:50:56 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:12:28.919 19:50:56 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:12:28.919 19:50:56 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:12:28.919 19:50:56 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:12:28.919 19:50:56 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:12:28.919 19:50:56 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:12:28.919 19:50:56 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:12:28.919 19:50:56 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:12:28.919 19:50:56 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:12:28.919 19:50:56 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@21 -- # 
TARGET_IP2=10.0.0.3 00:12:28.919 19:50:56 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:12:28.919 19:50:56 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:12:28.919 19:50:56 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:12:28.919 19:50:56 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:12:28.919 19:50:56 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:12:28.919 19:50:56 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:12:28.919 19:50:56 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:12:28.919 19:50:56 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:12:28.919 19:50:56 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@11 -- # iscsitestinit 00:12:28.919 19:50:56 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:12:28.919 19:50:56 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:28.919 19:50:56 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@16 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:12:28.919 19:50:56 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@18 -- # CONNECTION_NUMBER=30 00:12:28.919 19:50:56 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@40 -- # timing_enter start_iscsi_tgt 00:12:28.919 19:50:56 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:28.919 19:50:56 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:12:28.919 19:50:56 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@42 -- # iscsipid=72534 00:12:28.919 19:50:56 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@41 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:12:28.919 iSCSI target launched. pid: 72534 00:12:28.919 19:50:56 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@43 -- # echo 'iSCSI target launched. pid: 72534' 00:12:28.919 19:50:56 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@44 -- # trap 'remove_backends; iscsicleanup; killprocess $iscsipid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:12:28.919 19:50:56 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@46 -- # waitforlisten 72534 00:12:28.919 19:50:56 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 72534 ']' 00:12:28.920 19:50:56 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.920 19:50:56 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:28.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.920 19:50:56 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.920 19:50:56 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:28.920 19:50:56 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:12:28.920 [2024-07-24 19:50:56.771685] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:12:28.920 [2024-07-24 19:50:56.771768] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72534 ] 00:12:28.920 [2024-07-24 19:50:56.902000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.920 [2024-07-24 19:50:57.058737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.205 19:50:57 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:29.205 19:50:57 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:12:29.205 19:50:57 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_options -o 30 -a 128 00:12:29.463 19:50:58 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:12:30.029 [2024-07-24 19:50:58.419988] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:30.287 19:50:58 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:30.288 19:50:58 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:12:30.546 19:50:59 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@50 -- # timing_exit start_iscsi_tgt 00:12:30.546 19:50:59 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:30.546 19:50:59 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:12:30.546 19:50:59 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@52 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:12:30.804 19:50:59 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:12:31.062 Creating an iSCSI target node. 00:12:31.062 19:50:59 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@55 -- # echo 'Creating an iSCSI target node.' 00:12:31.062 19:50:59 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs0 -c 1048576 00:12:31.320 19:50:59 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@56 -- # ls_guid=55f20a5f-8186-4de6-81cf-2e8e26c19375 00:12:31.320 19:50:59 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@59 -- # get_lvs_free_mb 55f20a5f-8186-4de6-81cf-2e8e26c19375 00:12:31.320 19:50:59 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1364 -- # local lvs_uuid=55f20a5f-8186-4de6-81cf-2e8e26c19375 00:12:31.320 19:50:59 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1365 -- # local lvs_info 00:12:31.320 19:50:59 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1366 -- # local fc 00:12:31.320 19:50:59 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1367 -- # local cs 00:12:31.320 19:50:59 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:12:31.578 19:51:00 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:12:31.578 { 00:12:31.578 "uuid": "55f20a5f-8186-4de6-81cf-2e8e26c19375", 00:12:31.578 "name": "lvs0", 00:12:31.578 "base_bdev": "Nvme0n1", 00:12:31.578 "total_data_clusters": 5099, 00:12:31.578 "free_clusters": 5099, 00:12:31.578 "block_size": 4096, 00:12:31.578 "cluster_size": 
1048576 00:12:31.578 } 00:12:31.578 ]' 00:12:31.578 19:51:00 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="55f20a5f-8186-4de6-81cf-2e8e26c19375") .free_clusters' 00:12:31.578 19:51:00 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1369 -- # fc=5099 00:12:31.578 19:51:00 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="55f20a5f-8186-4de6-81cf-2e8e26c19375") .cluster_size' 00:12:31.578 19:51:00 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1370 -- # cs=1048576 00:12:31.578 19:51:00 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1373 -- # free_mb=5099 00:12:31.578 5099 00:12:31.578 19:51:00 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1374 -- # echo 5099 00:12:31.578 19:51:00 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@60 -- # lvol_bdev_size=169 00:12:31.578 19:51:00 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # seq 1 30 00:12:31.578 19:51:00 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:31.578 19:51:00 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 55f20a5f-8186-4de6-81cf-2e8e26c19375 lbd_1 169 00:12:31.835 8fa12143-11c4-4d03-acdb-2e5ce6786ee6 00:12:31.835 19:51:00 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:31.835 19:51:00 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 55f20a5f-8186-4de6-81cf-2e8e26c19375 lbd_2 169 00:12:32.093 d8a2c8ed-1ce2-4362-a212-ae7771fa0e65 00:12:32.352 19:51:00 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i 
in $(seq 1 $CONNECTION_NUMBER) 00:12:32.352 19:51:00 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 55f20a5f-8186-4de6-81cf-2e8e26c19375 lbd_3 169 00:12:32.352 7c81bd68-9f39-4c68-9b60-de43abc21725 00:12:32.611 19:51:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:32.611 19:51:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 55f20a5f-8186-4de6-81cf-2e8e26c19375 lbd_4 169 00:12:32.611 45bc263f-cc8a-4d7b-9c50-6670b469e3e1 00:12:32.611 19:51:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:32.611 19:51:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 55f20a5f-8186-4de6-81cf-2e8e26c19375 lbd_5 169 00:12:32.869 fa72fa05-5d24-40ba-8f85-c9fca4401a09 00:12:32.869 19:51:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:32.869 19:51:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 55f20a5f-8186-4de6-81cf-2e8e26c19375 lbd_6 169 00:12:33.128 17babbb5-474b-42d9-aad4-565f254fe2ef 00:12:33.128 19:51:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:33.128 19:51:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 55f20a5f-8186-4de6-81cf-2e8e26c19375 lbd_7 169 00:12:33.386 42134c3f-e1af-4878-b286-590e4363ca3d 00:12:33.386 19:51:01 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:33.386 19:51:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 55f20a5f-8186-4de6-81cf-2e8e26c19375 lbd_8 169 00:12:33.645 e7309cca-4bda-4035-a83f-27fa38705f35 00:12:33.645 19:51:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:33.645 19:51:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 55f20a5f-8186-4de6-81cf-2e8e26c19375 lbd_9 169 00:12:33.904 03d99d8e-0579-4819-b915-c1dc100a0f4e 00:12:33.904 19:51:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:33.904 19:51:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 55f20a5f-8186-4de6-81cf-2e8e26c19375 lbd_10 169 00:12:34.162 5e325068-e2b8-42ec-8ac7-6643181f0755 00:12:34.162 19:51:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:34.162 19:51:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 55f20a5f-8186-4de6-81cf-2e8e26c19375 lbd_11 169 00:12:34.421 0ab1a8c0-d3dc-4e98-8577-3e928ab9ecee 00:12:34.421 19:51:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:34.421 19:51:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 55f20a5f-8186-4de6-81cf-2e8e26c19375 lbd_12 169 00:12:34.680 b5965e14-ca91-436b-8ff0-52e6eac4fae7 00:12:34.680 19:51:03 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:34.680 19:51:03 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 55f20a5f-8186-4de6-81cf-2e8e26c19375 lbd_13 169 00:12:34.938 b47665fa-0cf1-41d6-8c3a-2f03fb430837 00:12:34.938 19:51:03 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:34.938 19:51:03 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 55f20a5f-8186-4de6-81cf-2e8e26c19375 lbd_14 169 00:12:35.196 cecfc10d-e289-4eb8-a0a0-d6f618a6214b 00:12:35.196 19:51:03 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:35.196 19:51:03 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 55f20a5f-8186-4de6-81cf-2e8e26c19375 lbd_15 169 00:12:35.196 066e2fd0-10f1-437f-9105-d65e889b22ff 00:12:35.196 19:51:03 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:35.196 19:51:03 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 55f20a5f-8186-4de6-81cf-2e8e26c19375 lbd_16 169 00:12:35.497 c93a650a-3dbe-4546-8e90-499538f8888a 00:12:35.497 19:51:04 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:35.497 19:51:04 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 55f20a5f-8186-4de6-81cf-2e8e26c19375 lbd_17 169 00:12:35.756 
db6aa37d-156f-4ff2-9257-465b8b435b27 00:12:35.756 19:51:04 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:35.756 19:51:04 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 55f20a5f-8186-4de6-81cf-2e8e26c19375 lbd_18 169 00:12:36.015 02b66384-b229-4e46-aa59-65352ceb62bf 00:12:36.015 19:51:04 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:36.015 19:51:04 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 55f20a5f-8186-4de6-81cf-2e8e26c19375 lbd_19 169 00:12:36.273 3ab90269-491d-40eb-ac1d-2a86e6415447 00:12:36.273 19:51:04 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:36.273 19:51:04 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 55f20a5f-8186-4de6-81cf-2e8e26c19375 lbd_20 169 00:12:36.531 edd0aa01-e334-4266-bc1e-da745202b2f4 00:12:36.531 19:51:05 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:36.531 19:51:05 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 55f20a5f-8186-4de6-81cf-2e8e26c19375 lbd_21 169 00:12:36.789 1186ffb4-0be4-48bd-82b3-f2850a32cc95 00:12:36.789 19:51:05 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:36.789 19:51:05 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
55f20a5f-8186-4de6-81cf-2e8e26c19375 lbd_22 169 00:12:37.048 182c9026-45bb-4cb1-b899-042115318b6d 00:12:37.048 19:51:05 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:37.048 19:51:05 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 55f20a5f-8186-4de6-81cf-2e8e26c19375 lbd_23 169 00:12:37.311 78043ea0-6878-4ed2-a071-c82161484d89 00:12:37.311 19:51:05 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:37.311 19:51:05 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 55f20a5f-8186-4de6-81cf-2e8e26c19375 lbd_24 169 00:12:37.581 0ca38c0b-aa54-46ea-83d1-0a66ee12c132 00:12:37.581 19:51:06 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:37.581 19:51:06 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 55f20a5f-8186-4de6-81cf-2e8e26c19375 lbd_25 169 00:12:37.840 b1403173-d687-4149-aab0-1e3a68705fc9 00:12:37.840 19:51:06 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:37.840 19:51:06 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 55f20a5f-8186-4de6-81cf-2e8e26c19375 lbd_26 169 00:12:38.099 20a84d10-2f49-4c89-8e8d-e52fcf658ef6 00:12:38.099 19:51:06 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:38.099 19:51:06 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 55f20a5f-8186-4de6-81cf-2e8e26c19375 lbd_27 169 00:12:38.357 a439cacc-0190-46bf-af30-d30d98825071 00:12:38.357 19:51:06 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:38.357 19:51:06 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 55f20a5f-8186-4de6-81cf-2e8e26c19375 lbd_28 169 00:12:38.615 1602db61-f502-44f1-a849-e7dd524f7753 00:12:38.874 19:51:07 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:38.874 19:51:07 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 55f20a5f-8186-4de6-81cf-2e8e26c19375 lbd_29 169 00:12:38.874 bdccf3f9-e7a3-4470-be55-b1bc093468fb 00:12:39.132 19:51:07 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:39.132 19:51:07 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 55f20a5f-8186-4de6-81cf-2e8e26c19375 lbd_30 169 00:12:39.132 7f3caa05-fa25-4c9c-8d34-c8e4e87d828f 00:12:39.132 19:51:07 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # seq 1 30 00:12:39.132 19:51:07 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:39.132 19:51:07 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_1:0 00:12:39.132 19:51:07 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target1 Target1_alias lvs0/lbd_1:0 1:2 256 -d 00:12:39.391 19:51:08 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:39.391 19:51:08 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_2:0 00:12:39.391 19:51:08 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target2 Target2_alias lvs0/lbd_2:0 1:2 256 -d 00:12:39.653 19:51:08 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:39.653 19:51:08 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_3:0 00:12:39.653 19:51:08 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias lvs0/lbd_3:0 1:2 256 -d 00:12:39.912 19:51:08 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:39.912 19:51:08 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_4:0 00:12:39.912 19:51:08 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target4 Target4_alias lvs0/lbd_4:0 1:2 256 -d 00:12:40.169 19:51:08 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:40.169 19:51:08 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_5:0 00:12:40.169 19:51:08 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target5 Target5_alias lvs0/lbd_5:0 1:2 256 -d 00:12:40.427 19:51:08 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:40.427 19:51:08 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_6:0 00:12:40.427 19:51:08 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target6 Target6_alias lvs0/lbd_6:0 1:2 256 -d 00:12:40.686 19:51:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:40.686 19:51:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_7:0 00:12:40.686 19:51:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target7 Target7_alias lvs0/lbd_7:0 1:2 256 -d 00:12:40.945 19:51:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:40.945 19:51:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_8:0 00:12:40.945 19:51:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target8 Target8_alias lvs0/lbd_8:0 1:2 256 -d 00:12:41.203 19:51:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:41.203 19:51:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_9:0 00:12:41.203 19:51:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target9 Target9_alias lvs0/lbd_9:0 1:2 256 -d 00:12:41.461 19:51:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 
$CONNECTION_NUMBER) 00:12:41.461 19:51:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_10:0 00:12:41.461 19:51:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target10 Target10_alias lvs0/lbd_10:0 1:2 256 -d 00:12:41.720 19:51:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:41.720 19:51:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_11:0 00:12:41.720 19:51:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target11 Target11_alias lvs0/lbd_11:0 1:2 256 -d 00:12:41.978 19:51:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:41.978 19:51:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_12:0 00:12:41.978 19:51:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target12 Target12_alias lvs0/lbd_12:0 1:2 256 -d 00:12:42.237 19:51:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:42.237 19:51:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_13:0 00:12:42.237 19:51:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target13 Target13_alias lvs0/lbd_13:0 1:2 256 -d 00:12:42.496 19:51:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:42.496 19:51:11 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_14:0 00:12:42.496 19:51:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target14 Target14_alias lvs0/lbd_14:0 1:2 256 -d 00:12:42.755 19:51:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:42.755 19:51:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_15:0 00:12:42.755 19:51:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target15 Target15_alias lvs0/lbd_15:0 1:2 256 -d 00:12:43.321 19:51:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:43.321 19:51:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_16:0 00:12:43.321 19:51:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target16 Target16_alias lvs0/lbd_16:0 1:2 256 -d 00:12:43.321 19:51:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:43.321 19:51:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_17:0 00:12:43.321 19:51:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target17 Target17_alias lvs0/lbd_17:0 1:2 256 -d 00:12:43.888 19:51:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:43.888 19:51:12 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_18:0 00:12:43.888 19:51:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target18 Target18_alias lvs0/lbd_18:0 1:2 256 -d 00:12:44.147 19:51:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:44.147 19:51:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_19:0 00:12:44.147 19:51:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target19 Target19_alias lvs0/lbd_19:0 1:2 256 -d 00:12:44.406 19:51:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:44.406 19:51:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_20:0 00:12:44.406 19:51:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target20 Target20_alias lvs0/lbd_20:0 1:2 256 -d 00:12:44.665 19:51:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:44.665 19:51:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_21:0 00:12:44.665 19:51:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target21 Target21_alias lvs0/lbd_21:0 1:2 256 -d 00:12:44.922 19:51:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:44.922 19:51:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # 
lun=lvs0/lbd_22:0 00:12:44.923 19:51:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target22 Target22_alias lvs0/lbd_22:0 1:2 256 -d 00:12:45.502 19:51:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:45.502 19:51:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_23:0 00:12:45.502 19:51:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target23 Target23_alias lvs0/lbd_23:0 1:2 256 -d 00:12:45.502 19:51:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:45.502 19:51:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_24:0 00:12:45.502 19:51:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target24 Target24_alias lvs0/lbd_24:0 1:2 256 -d 00:12:46.065 19:51:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:46.065 19:51:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_25:0 00:12:46.065 19:51:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target25 Target25_alias lvs0/lbd_25:0 1:2 256 -d 00:12:46.323 19:51:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:46.323 19:51:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_26:0 00:12:46.323 19:51:14 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target26 Target26_alias lvs0/lbd_26:0 1:2 256 -d 00:12:46.580 19:51:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:46.580 19:51:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_27:0 00:12:46.580 19:51:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target27 Target27_alias lvs0/lbd_27:0 1:2 256 -d 00:12:46.838 19:51:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:46.838 19:51:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_28:0 00:12:46.838 19:51:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target28 Target28_alias lvs0/lbd_28:0 1:2 256 -d 00:12:47.126 19:51:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:47.126 19:51:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_29:0 00:12:47.126 19:51:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target29 Target29_alias lvs0/lbd_29:0 1:2 256 -d 00:12:47.384 19:51:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:12:47.384 19:51:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_30:0 00:12:47.384 19:51:15 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target30 Target30_alias lvs0/lbd_30:0 1:2 256 -d 00:12:47.642 19:51:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@69 -- # sleep 1 00:12:48.577 Logging into iSCSI target. 00:12:48.577 19:51:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@71 -- # echo 'Logging into iSCSI target.' 00:12:48.577 19:51:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@72 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:12:48.577 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:12:48.577 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target2 00:12:48.577 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:12:48.577 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target4 00:12:48.577 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target5 00:12:48.577 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target6 00:12:48.577 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target7 00:12:48.577 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target8 00:12:48.577 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target9 00:12:48.577 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target10 00:12:48.577 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target11 00:12:48.577 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target12 00:12:48.577 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target13 00:12:48.577 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target14 00:12:48.577 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target15 00:12:48.577 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target16 00:12:48.577 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target17 00:12:48.577 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target18 00:12:48.577 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target19 00:12:48.577 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target20 00:12:48.577 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target21 00:12:48.577 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target22 00:12:48.577 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target23 00:12:48.577 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target24 
00:12:48.577 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target25 00:12:48.577 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target26 00:12:48.577 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target27 00:12:48.577 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target28 00:12:48.577 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target29 00:12:48.577 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target30 00:12:48.577 19:51:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@73 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:12:48.577 [2024-07-24 19:51:17.216479] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:48.577 [2024-07-24 19:51:17.238799] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:48.835 [2024-07-24 19:51:17.276937] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:48.835 [2024-07-24 19:51:17.279645] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:48.835 [2024-07-24 19:51:17.325383] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:48.835 [2024-07-24 19:51:17.345740] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:48.835 [2024-07-24 19:51:17.377343] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:48.835 [2024-07-24 19:51:17.396675] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:48.835 [2024-07-24 19:51:17.446727] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:48.835 [2024-07-24 19:51:17.472471] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:48.835 [2024-07-24 19:51:17.491822] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:49.139 [2024-07-24 19:51:17.526754] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:49.139 [2024-07-24 19:51:17.560212] 
scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:49.139 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:12:49.139 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:12:49.139 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:12:49.139 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:12:49.139 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:12:49.139 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:12:49.139 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:12:49.139 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:12:49.139 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:12:49.139 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:12:49.139 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] 00:12:49.139 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] 00:12:49.139 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] 00:12:49.139 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] 00:12:49.139 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] 00:12:49.139 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target16, portal: 10.0.0.1,3260] 00:12:49.139 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target17, portal: 10.0.0.1,3260] 00:12:49.139 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target18, portal: 10.0.0.1,3260] 00:12:49.139 Logging in to [iface: 
default, target: iqn.2016-06.io.spdk:Target19, portal: 10.0.0.1,3260] 00:12:49.139 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target20, portal: 10.0.0.1,3260] 00:12:49.139 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target21, portal: 10.0.0.1,3260] 00:12:49.139 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target22, portal: 10.0.0.1,3260] 00:12:49.139 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target23, portal: 10.0.0.1,3260] 00:12:49.139 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target24, portal: 10.0.0.1,3260] 00:12:49.139 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target25, portal: 10.0.0.1,3260] 00:12:49.139 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target26, portal: 10.0.0.1,3260] 00:12:49.139 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target27, portal: 10.0.0.1,3260] 00:12:49.139 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target28, portal: 10.0.0.1,3260] 00:12:49.139 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target29, portal: 10.0.0.1,3260] 00:12:49.139 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target30, portal: 10.0.0.1,3260] 00:12:49.139 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:12:49.139 Login to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:12:49.139 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:12:49.139 Login to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:12:49.139 Login to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 00:12:49.139 Login to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 
00:12:49.139 Login to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:12:49.139 Login to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 00:12:49.139 Login to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:12:49.139 Login to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:12:49.139 Login to [iface: default, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] successful. 00:12:49.139 Login to [iface: default, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] successful. 00:12:49.139 Login to [iface: default, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] successful. 00:12:49.139 Login to [iface: default, target: iqn.2016-06.io.spdk:Target14, por[2024-07-24 19:51:17.588885] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:49.139 [2024-07-24 19:51:17.648479] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:49.139 [2024-07-24 19:51:17.660068] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:49.139 [2024-07-24 19:51:17.699967] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:49.139 [2024-07-24 19:51:17.724709] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:49.139 [2024-07-24 19:51:17.784199] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:49.139 [2024-07-24 19:51:17.800987] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:49.398 [2024-07-24 19:51:17.851765] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:49.398 [2024-07-24 19:51:17.881995] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:49.398 [2024-07-24 19:51:17.944705] 
scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:49.398 [2024-07-24 19:51:17.980442] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:49.398 [2024-07-24 19:51:18.045162] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:49.655 [2024-07-24 19:51:18.085275] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:49.655 [2024-07-24 19:51:18.148291] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:49.655 [2024-07-24 19:51:18.180437] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:49.655 tal: 10.0.0.1,3260] successful. 00:12:49.655 Login to [iface: default, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] successful. 00:12:49.655 Login to [iface: default, target: iqn.2016-06.io.spdk:Target16, portal: 10.0.0.1,3260] successful. 00:12:49.655 Login to [iface: default, target: iqn.2016-06.io.spdk:Target17, portal: 10.0.0.1,3260] successful. 00:12:49.655 Login to [iface: default, target: iqn.2016-06.io.spdk:Target18, portal: 10.0.0.1,3260] successful. 00:12:49.655 Login to [iface: default, target: iqn.2016-06.io.spdk:Target19, portal: 10.0.0.1,3260] successful. 00:12:49.655 Login to [iface: default, target: iqn.2016-06.io.spdk:Target20, portal: 10.0.0.1,3260] successful. 00:12:49.655 Login to [iface: default, target: iqn.2016-06.io.spdk:Target21, portal: 10.0.0.1,3260] successful. 00:12:49.655 Login to [iface: default, target: iqn.2016-06.io.spdk:Target22, portal: 10.0.0.1,3260] successful. 00:12:49.655 Login to [iface: default, target: iqn.2016-06.io.spdk:Target23, portal: 10.0.0.1,3260] successful. 00:12:49.655 Login to [iface: default, target: iqn.2016-06.io.spdk:Target24, portal: 10.0.0.1,3260] successful. 00:12:49.655 Login to [iface: default, target: iqn.2016-06.io.spdk:Target25, portal: 10.0.0.1,3260] successful. 
00:12:49.655 Login to [iface: default, target: iqn.2016-06.io.spdk:Target26, portal: 10.0.0.1,3260] successful. 00:12:49.655 Login to [iface: default, target: iqn.2016-06.io.spdk:Target27, portal: 10.0.0.1,3260] successful. 00:12:49.656 Login to [iface: default, target: iqn.2016-06.io.spdk:Target28, portal: 10.0.0.1,3260] successful. 00:12:49.656 Login to [iface: default, target: iqn.2016-06.io.spdk:Target29, portal: 10.0.0.1,3260] successful. 00:12:49.656 Login to [iface: default, target: iqn.2016-06.io.spdk:Target30, portal: 10.0.0.1,3260] successful. 00:12:49.656 [2024-07-24 19:51:18.212514] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:49.656 19:51:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@74 -- # waitforiscsidevices 30 00:12:49.656 19:51:18 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@116 -- # local num=30 00:12:49.656 19:51:18 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:12:49.656 19:51:18 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:12:49.656 19:51:18 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:12:49.656 19:51:18 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:12:49.656 [2024-07-24 19:51:18.223646] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:49.656 19:51:18 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@119 -- # n=30 00:12:49.656 19:51:18 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@120 -- # '[' 30 -ne 30 ']' 00:12:49.656 19:51:18 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@123 -- # return 0 00:12:49.656 19:51:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@76 -- # echo 'Running FIO' 00:12:49.656 Running FIO 00:12:49.656 19:51:18 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 64 -t randrw -r 5 00:12:49.913 [global] 00:12:49.913 thread=1 00:12:49.913 invalidate=1 00:12:49.913 rw=randrw 00:12:49.913 time_based=1 00:12:49.913 runtime=5 00:12:49.913 ioengine=libaio 00:12:49.913 direct=1 00:12:49.913 bs=131072 00:12:49.913 iodepth=64 00:12:49.913 norandommap=1 00:12:49.913 numjobs=1 00:12:49.913 00:12:49.913 [job0] 00:12:49.913 filename=/dev/sda 00:12:49.913 [job1] 00:12:49.913 filename=/dev/sdb 00:12:49.913 [job2] 00:12:49.913 filename=/dev/sdc 00:12:49.913 [job3] 00:12:49.913 filename=/dev/sdd 00:12:49.913 [job4] 00:12:49.913 filename=/dev/sde 00:12:49.913 [job5] 00:12:49.913 filename=/dev/sdf 00:12:49.913 [job6] 00:12:49.913 filename=/dev/sdg 00:12:49.913 [job7] 00:12:49.913 filename=/dev/sdh 00:12:49.913 [job8] 00:12:49.913 filename=/dev/sdi 00:12:49.913 [job9] 00:12:49.913 filename=/dev/sdj 00:12:49.913 [job10] 00:12:49.913 filename=/dev/sdk 00:12:49.913 [job11] 00:12:49.913 filename=/dev/sdl 00:12:49.913 [job12] 00:12:49.913 filename=/dev/sdm 00:12:49.913 [job13] 00:12:49.913 filename=/dev/sdn 00:12:49.913 [job14] 00:12:49.913 filename=/dev/sdo 00:12:49.913 [job15] 00:12:49.913 filename=/dev/sdp 00:12:49.913 [job16] 00:12:49.913 filename=/dev/sdq 00:12:49.913 [job17] 00:12:49.913 filename=/dev/sdr 00:12:49.913 [job18] 00:12:49.913 filename=/dev/sds 00:12:49.913 [job19] 00:12:49.913 filename=/dev/sdt 00:12:49.913 [job20] 00:12:49.913 filename=/dev/sdu 00:12:49.913 [job21] 00:12:49.913 filename=/dev/sdv 00:12:49.913 [job22] 00:12:49.913 filename=/dev/sdw 00:12:49.913 [job23] 00:12:49.913 filename=/dev/sdx 00:12:49.913 [job24] 00:12:49.913 filename=/dev/sdy 00:12:49.913 [job25] 00:12:49.913 filename=/dev/sdz 00:12:49.913 [job26] 00:12:49.913 filename=/dev/sdaa 00:12:49.913 [job27] 00:12:49.913 filename=/dev/sdab 00:12:49.913 [job28] 00:12:49.913 filename=/dev/sdac 00:12:49.913 [job29] 00:12:49.913 filename=/dev/sdad 
00:12:50.479 queue_depth set to 113 (sda) 00:12:50.479 queue_depth set to 113 (sdb) 00:12:50.479 queue_depth set to 113 (sdc) 00:12:50.479 queue_depth set to 113 (sdd) 00:12:50.479 queue_depth set to 113 (sde) 00:12:50.479 queue_depth set to 113 (sdf) 00:12:50.479 queue_depth set to 113 (sdg) 00:12:50.479 queue_depth set to 113 (sdh) 00:12:50.479 queue_depth set to 113 (sdi) 00:12:50.479 queue_depth set to 113 (sdj) 00:12:50.479 queue_depth set to 113 (sdk) 00:12:50.738 queue_depth set to 113 (sdl) 00:12:50.738 queue_depth set to 113 (sdm) 00:12:50.738 queue_depth set to 113 (sdn) 00:12:50.738 queue_depth set to 113 (sdo) 00:12:50.738 queue_depth set to 113 (sdp) 00:12:50.738 queue_depth set to 113 (sdq) 00:12:50.738 queue_depth set to 113 (sdr) 00:12:50.738 queue_depth set to 113 (sds) 00:12:50.738 queue_depth set to 113 (sdt) 00:12:50.738 queue_depth set to 113 (sdu) 00:12:50.738 queue_depth set to 113 (sdv) 00:12:50.738 queue_depth set to 113 (sdw) 00:12:50.996 queue_depth set to 113 (sdx) 00:12:50.996 queue_depth set to 113 (sdy) 00:12:50.996 queue_depth set to 113 (sdz) 00:12:50.996 queue_depth set to 113 (sdaa) 00:12:50.997 queue_depth set to 113 (sdab) 00:12:50.997 queue_depth set to 113 (sdac) 00:12:50.997 queue_depth set to 113 (sdad) 00:12:51.255 job0: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:12:51.255 job1: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:12:51.255 job2: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:12:51.255 job3: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:12:51.255 job4: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:12:51.255 job5: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, 
ioengine=libaio, iodepth=64
00:12:51.255 job6: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:12:51.255 job7: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:12:51.255 job8: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:12:51.255 job9: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:12:51.255 job10: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:12:51.255 job11: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:12:51.255 job12: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:12:51.255 job13: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:12:51.255 job14: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:12:51.255 job15: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:12:51.255 job16: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:12:51.255 job17: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:12:51.255 job18: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:12:51.255 job19: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:12:51.255 job20: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:12:51.255 job21: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:12:51.255 job22: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:12:51.255 job23: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:12:51.255 job24: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:12:51.255 job25: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:12:51.255 job26: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:12:51.255 job27: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:12:51.255 job28: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:12:51.255 job29: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
00:12:51.255 fio-3.35
00:12:51.255 Starting 30 threads
00:12:51.255 [2024-07-24 19:51:19.723530] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:51.255 [2024-07-24 19:51:19.727880] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:51.255 [2024-07-24 19:51:19.731550] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:51.255 [2024-07-24 19:51:19.734856] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:51.255 [2024-07-24 19:51:19.738483] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:51.255 [2024-07-24 19:51:19.741654] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:51.255 [2024-07-24 19:51:19.744697] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:51.255 [2024-07-24 19:51:19.747860] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:51.255 [2024-07-24 19:51:19.750905] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:51.255 [2024-07-24 19:51:19.753946] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:51.255 [2024-07-24 19:51:19.757252] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:51.255 [2024-07-24 19:51:19.760414] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:51.255 [2024-07-24 19:51:19.763450] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:51.255 [2024-07-24 19:51:19.766454] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:51.255 [2024-07-24 19:51:19.769565] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:51.255 [2024-07-24 19:51:19.772639] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:51.255 [2024-07-24 19:51:19.776341] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:51.255 [2024-07-24 19:51:19.779421] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:51.255 [2024-07-24 19:51:19.784868] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:51.255 [2024-07-24 19:51:19.787843] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:51.255 [2024-07-24 19:51:19.790879] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:51.255 [2024-07-24 19:51:19.795029] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:51.255 [2024-07-24 19:51:19.798503] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:51.255 [2024-07-24 19:51:19.801974] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:51.255 [2024-07-24 19:51:19.805376] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:51.255 [2024-07-24 19:51:19.808653] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:51.255 [2024-07-24 19:51:19.812147] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:51.255 [2024-07-24 19:51:19.818352] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:51.255 [2024-07-24 19:51:19.823241] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:51.255 [2024-07-24 19:51:19.827317] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:57.817 [2024-07-24 19:51:25.926881] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:57.817 [2024-07-24 19:51:25.947102] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:57.817 [2024-07-24 19:51:25.955576] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:57.817 [2024-07-24 19:51:25.959246] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:57.817 [2024-07-24 19:51:25.969556] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:57.817 [2024-07-24 19:51:25.974081] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:57.817 [2024-07-24 19:51:25.978009] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:57.817 [2024-07-24 19:51:25.981602] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:57.817 [2024-07-24 19:51:25.985502] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:57.817 [2024-07-24 19:51:25.989605] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:57.817 [2024-07-24 19:51:25.993520] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:57.817 [2024-07-24 19:51:25.997145] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:57.817 [2024-07-24 19:51:26.001604] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:57.817
00:12:57.817 job0: (groupid=0, jobs=1): err= 0: pid=73491: Wed Jul 24 19:51:25 2024
00:12:57.817 read: IOPS=61, BW=7918KiB/s (8108kB/s)(42.2MiB/5464msec)
00:12:57.817 slat (usec): min=9, max=119, avg=29.61, stdev=16.82
00:12:57.817 clat (msec): min=36, max=482, avg=75.74, stdev=53.41
00:12:57.817 lat (msec): min=36, max=482, avg=75.76, stdev=53.41
00:12:57.817 clat percentiles (msec):
00:12:57.817 | 1.00th=[ 43], 5.00th=[ 54], 10.00th=[ 55], 20.00th=[ 57],
00:12:57.817 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 60], 60.00th=[ 62],
00:12:57.817 | 70.00th=[ 67], 80.00th=[ 77], 90.00th=[ 120], 95.00th=[ 163],
00:12:57.817 | 99.00th=[ 468], 99.50th=[ 485], 99.90th=[ 485], 99.95th=[ 485],
00:12:57.817 | 99.99th=[ 485]
00:12:57.817 bw ( KiB/s): min= 4352, max=16384, per=3.33%, avg=8522.00, stdev=3486.63, samples=10
00:12:57.817 iops : min= 34, max= 128, avg=66.40, stdev=27.34, samples=10
00:12:57.817 write: IOPS=66, BW=8574KiB/s (8780kB/s)(45.8MiB/5464msec); 0 zone resets
00:12:57.817 slat (usec): min=11, max=142, avg=41.43, stdev=17.59
00:12:57.817 clat (msec): min=215, max=1360, avg=884.06, stdev=174.34
00:12:57.817 lat (msec): min=215, max=1360, avg=884.10, stdev=174.35
00:12:57.817 clat percentiles (msec):
00:12:57.817 | 1.00th=[ 257], 5.00th=[ 510], 10.00th=[ 667], 20.00th=[ 835],
00:12:57.817 | 30.00th=[ 860], 40.00th=[ 877], 50.00th=[ 885], 60.00th=[ 894],
00:12:57.817 | 70.00th=[ 911], 80.00th=[ 1020], 90.00th=[ 1083], 95.00th=[ 1099],
00:12:57.817 | 99.00th=[ 1301], 99.50th=[ 1351], 99.90th=[ 1368], 99.95th=[ 1368],
00:12:57.817 | 99.99th=[ 1368]
00:12:57.817 bw ( KiB/s): min= 3584, max= 8942, per=3.08%, avg=7855.60, stdev=1674.45, samples=10
00:12:57.817 iops : min= 28, max= 69, avg=61.20, stdev=12.98, samples=10
00:12:57.817 lat (msec) : 50=2.13%, 100=40.34%, 250=5.40%, 500=2.41%, 750=4.55%
00:12:57.817 lat (msec) : 1000=33.38%, 2000=11.79%
00:12:57.817 cpu : usr=0.29%, sys=0.38%, ctx=400, majf=0, minf=1
00:12:57.817 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.5%, >=64=91.1%
00:12:57.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:57.817 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0%
00:12:57.817 issued rwts: total=338,366,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:57.817 latency : target=0, window=0, percentile=100.00%, depth=64
00:12:57.818 job1: (groupid=0, jobs=1): err= 0: pid=73493: Wed Jul 24 19:51:25 2024
00:12:57.818 read: IOPS=59, BW=7591KiB/s (7773kB/s)(40.6MiB/5480msec)
00:12:57.818 slat (usec): min=8, max=917, avg=42.85, stdev=78.09
00:12:57.818 clat (msec): min=39, max=510, avg=78.53, stdev=52.88
00:12:57.818 lat (msec): min=40, max=510, avg=78.57, stdev=52.87
00:12:57.818 clat percentiles (msec):
00:12:57.818 | 1.00th=[ 48], 5.00th=[ 55], 10.00th=[ 56], 20.00th=[ 57],
00:12:57.818 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 64],
00:12:57.818 | 70.00th=[ 70], 80.00th=[ 81], 90.00th=[ 122], 95.00th=[ 174],
00:12:57.818 | 99.00th=[ 239], 99.50th=[ 498], 99.90th=[ 510], 99.95th=[ 510],
00:12:57.818 | 99.99th=[ 510]
00:12:57.818 bw ( KiB/s): min= 5888, max=14562, per=3.21%, avg=8213.10, stdev=2748.53, samples=10
00:12:57.818 iops : min= 46, max= 113, avg=64.00, stdev=21.31, samples=10
00:12:57.818 write: IOPS=66, BW=8549KiB/s (8754kB/s)(45.8MiB/5480msec); 0 zone resets
00:12:57.818 slat (usec): min=12, max=833, avg=57.67, stdev=79.20
00:12:57.818 clat (msec): min=231, max=1318, avg=886.84, stdev=168.34
00:12:57.818 lat (msec): min=231, max=1318, avg=886.90, stdev=168.35
00:12:57.818 clat percentiles (msec):
00:12:57.818 | 1.00th=[ 279], 5.00th=[ 542], 10.00th=[ 693], 20.00th=[ 835],
00:12:57.818 | 30.00th=[ 860], 40.00th=[ 885], 50.00th=[ 894], 60.00th=[ 902],
00:12:57.818 | 70.00th=[ 927], 80.00th=[ 1003], 90.00th=[ 1062], 95.00th=[ 1116],
00:12:57.818 | 99.00th=[ 1284], 99.50th=[ 1318], 99.90th=[ 1318], 99.95th=[ 1318],
00:12:57.818 | 99.99th=[ 1318]
00:12:57.818 bw ( KiB/s): min= 3321, max= 8942, per=3.07%, avg=7831.10, stdev=1750.37, samples=10
00:12:57.818 iops : min= 25, max= 69, avg=61.00, stdev=13.89, samples=10
00:12:57.818 lat (msec) : 50=0.58%, 100=39.36%, 250=6.95%, 500=2.17%, 750=4.78%
00:12:57.818 lat (msec) : 1000=35.31%, 2000=10.85%
00:12:57.818 cpu : usr=0.20%, sys=0.42%, ctx=434, majf=0, minf=1
00:12:57.818 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.6%, >=64=90.9%
00:12:57.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:57.818 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0%
00:12:57.818 issued rwts: total=325,366,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:57.818 latency : target=0, window=0, percentile=100.00%, depth=64
00:12:57.818 job2: (groupid=0, jobs=1): err= 0: pid=73499: Wed Jul 24 19:51:25 2024
00:12:57.818 read: IOPS=70, BW=9009KiB/s (9225kB/s)(48.1MiB/5470msec)
00:12:57.818 slat (usec): min=9, max=128, avg=30.02, stdev=17.41
00:12:57.818 clat (msec): min=39, max=504, avg=74.18, stdev=47.10
00:12:57.818 lat (msec): min=39, max=504, avg=74.21, stdev=47.10
00:12:57.818 clat percentiles (msec):
00:12:57.818 | 1.00th=[ 43], 5.00th=[ 55], 10.00th=[ 56], 20.00th=[ 57],
00:12:57.818 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 60], 60.00th=[ 61],
00:12:57.818 | 70.00th=[ 65], 80.00th=[ 72], 90.00th=[ 95], 95.00th=[ 182],
00:12:57.818 | 99.00th=[ 257], 99.50th=[ 489], 99.90th=[ 506], 99.95th=[ 506],
00:12:57.818 | 99.99th=[ 506]
00:12:57.818 bw ( KiB/s): min= 5632, max=14108, per=3.82%, avg=9778.80, stdev=2880.00, samples=10
00:12:57.818 iops : min= 44, max= 110, avg=76.20, stdev=22.59, samples=10
00:12:57.818 write: IOPS=67, BW=8588KiB/s (8794kB/s)(45.9MiB/5470msec); 0 zone resets
00:12:57.818 slat (usec): min=14, max=7366, avg=61.52, stdev=383.67
00:12:57.818 clat (msec): min=222, max=1326, avg=873.20, stdev=168.81
00:12:57.818 lat (msec): min=229, max=1326, avg=873.26, stdev=168.74
00:12:57.818 clat percentiles (msec):
00:12:57.818 | 1.00th=[ 321], 5.00th=[ 531], 10.00th=[ 642], 20.00th=[ 818],
00:12:57.818 | 30.00th=[ 852], 40.00th=[ 869], 50.00th=[ 885], 60.00th=[ 894],
00:12:57.818 | 70.00th=[ 927], 80.00th=[ 1011], 90.00th=[ 1036], 95.00th=[ 1083],
00:12:57.818 | 99.00th=[ 1301], 99.50th=[ 1318], 99.90th=[ 1334], 99.95th=[ 1334],
00:12:57.818 | 99.99th=[ 1334]
00:12:57.818 bw ( KiB/s): min= 3334, max= 8942, per=3.07%, avg=7830.70, stdev=1746.27, samples=10
00:12:57.818 iops : min= 26, max= 69, avg=61.00, stdev=13.58, samples=10
00:12:57.818 lat (msec) : 50=0.66%, 100=45.74%, 250=4.52%, 500=2.26%, 750=4.79%
00:12:57.818 lat (msec) : 1000=31.65%, 2000=10.37%
00:12:57.818 cpu : usr=0.22%, sys=0.42%, ctx=410, majf=0, minf=1
00:12:57.818 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.3%, >=64=91.6%
00:12:57.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:57.818 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:12:57.818 issued rwts: total=385,367,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:57.818 latency : target=0, window=0, percentile=100.00%, depth=64
00:12:57.818 job3: (groupid=0, jobs=1): err= 0: pid=73502: Wed Jul 24 19:51:25 2024
00:12:57.818 read: IOPS=63, BW=8167KiB/s (8363kB/s)(43.6MiB/5470msec)
00:12:57.818 slat (usec): min=9, max=453, avg=33.88, stdev=43.48
00:12:57.818 clat (msec): min=39, max=498, avg=76.13, stdev=50.31
00:12:57.818 lat (msec): min=39, max=498, avg=76.16, stdev=50.31
00:12:57.818 clat percentiles (msec):
00:12:57.818 | 1.00th=[ 43], 5.00th=[ 55], 10.00th=[ 56], 20.00th=[ 57],
00:12:57.818 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 60], 60.00th=[ 62],
00:12:57.818 | 70.00th=[ 66], 80.00th=[ 79], 90.00th=[ 125], 95.00th=[ 176],
00:12:57.818 | 99.00th=[ 234], 99.50th=[ 472], 99.90th=[ 498], 99.95th=[ 498],
00:12:57.818 | 99.99th=[ 498]
00:12:57.818 bw ( KiB/s): min= 3584, max=15616, per=3.45%, avg=8828.30, stdev=3195.57, samples=10
00:12:57.818 iops : min= 28, max= 122, avg=68.80, stdev=24.96, samples=10
00:12:57.818 write: IOPS=67, BW=8588KiB/s (8794kB/s)(45.9MiB/5470msec); 0 zone resets
00:12:57.818 slat (usec): min=12, max=437, avg=44.11, stdev=47.93
00:12:57.818 clat (msec): min=223, max=1334, avg=879.95, stdev=170.43
00:12:57.818 lat (msec): min=223, max=1334, avg=879.99, stdev=170.44
00:12:57.818 clat percentiles (msec):
00:12:57.818 | 1.00th=[ 271], 5.00th=[ 514], 10.00th=[ 667], 20.00th=[ 827],
00:12:57.818 | 30.00th=[ 860], 40.00th=[ 869], 50.00th=[ 885], 60.00th=[ 902],
00:12:57.818 | 70.00th=[ 936], 80.00th=[ 995], 90.00th=[ 1070], 95.00th=[ 1116],
00:12:57.818 | 99.00th=[ 1301], 99.50th=[ 1318], 99.90th=[ 1334], 99.95th=[ 1334],
00:12:57.818 | 99.99th=[ 1334]
00:12:57.818 bw ( KiB/s): min= 3584, max= 8942, per=3.08%, avg=7855.70, stdev=1675.08, samples=10
00:12:57.818 iops : min= 28, max= 69, avg=61.20, stdev=13.01, samples=10
00:12:57.818 lat (msec) : 50=0.84%, 100=41.90%, 250=6.01%, 500=2.23%, 750=4.05%
00:12:57.818 lat (msec) : 1000=34.92%, 2000=10.06%
00:12:57.818 cpu : usr=0.20%, sys=0.35%, ctx=472, majf=0, minf=1
00:12:57.818 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.5%, >=64=91.2%
00:12:57.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:57.818 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0%
00:12:57.818 issued rwts: total=349,367,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:57.818 latency : target=0, window=0, percentile=100.00%, depth=64
00:12:57.818 job4: (groupid=0, jobs=1): err= 0: pid=73534: Wed Jul 24 19:51:26 2024
00:12:57.818 read: IOPS=64, BW=8300KiB/s (8500kB/s)(44.2MiB/5459msec)
00:12:57.818 slat (usec): min=9, max=11205, avg=67.12, stdev=596.89
00:12:57.818 clat (msec): min=46, max=506, avg=80.14, stdev=54.20
00:12:57.818 lat (msec): min=46, max=506, avg=80.20, stdev=54.20
00:12:57.818 clat percentiles (msec):
00:12:57.818 | 1.00th=[ 54], 5.00th=[ 55], 10.00th=[ 57], 20.00th=[ 58],
00:12:57.818 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 64],
00:12:57.818 | 70.00th=[ 72], 80.00th=[ 84], 90.00th=[ 132], 95.00th=[ 167],
00:12:57.818 | 99.00th=[ 464], 99.50th=[ 493], 99.90th=[ 506], 99.95th=[ 506],
00:12:57.818 | 99.99th=[ 506]
00:12:57.818 bw ( KiB/s): min= 4864, max=20480, per=3.49%, avg=8934.40, stdev=4388.57, samples=10
00:12:57.818 iops : min= 38, max= 160, avg=69.80, stdev=34.29, samples=10
00:12:57.818 write: IOPS=67, BW=8582KiB/s (8788kB/s)(45.8MiB/5459msec); 0 zone resets
00:12:57.818 slat (usec): min=16, max=1012, avg=51.13, stdev=85.16
00:12:57.818 clat (msec): min=215, max=1299, avg=875.51, stdev=168.91
00:12:57.818 lat (msec): min=215, max=1300, avg=875.56, stdev=168.92
00:12:57.818 clat percentiles (msec):
00:12:57.818 | 1.00th=[ 279], 5.00th=[ 523], 10.00th=[ 693], 20.00th=[ 776],
00:12:57.818 | 30.00th=[ 860], 40.00th=[ 877], 50.00th=[ 885], 60.00th=[ 902],
00:12:57.818 | 70.00th=[ 927], 80.00th=[ 1011], 90.00th=[ 1062], 95.00th=[ 1099],
00:12:57.818 | 99.00th=[ 1234], 99.50th=[ 1267], 99.90th=[ 1301], 99.95th=[ 1301],
00:12:57.818 | 99.99th=[ 1301]
00:12:57.818 bw ( KiB/s): min= 3584, max= 8960, per=3.08%, avg=7859.20, stdev=1676.75, samples=10
00:12:57.818 iops : min= 28, max= 70, avg=61.40, stdev=13.10, samples=10
00:12:57.818 lat (msec) : 50=0.28%, 100=40.83%, 250=7.92%, 500=2.22%, 750=5.97%
00:12:57.818 lat (msec) : 1000=31.81%, 2000=10.97%
00:12:57.818 cpu : usr=0.20%, sys=0.66%, ctx=461, majf=0, minf=1
00:12:57.818 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.3%
00:12:57.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:57.818 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0%
00:12:57.818 issued rwts: total=354,366,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:57.818 latency : target=0, window=0, percentile=100.00%, depth=64
00:12:57.818 job5: (groupid=0, jobs=1): err= 0: pid=73536: Wed Jul 24 19:51:26 2024
00:12:57.818 read: IOPS=69, BW=8882KiB/s (9095kB/s)(47.4MiB/5462msec)
00:12:57.818 slat (usec): min=9, max=916, avg=28.81, stdev=47.51
00:12:57.818 clat (msec): min=41, max=494, avg=76.79, stdev=53.39
00:12:57.818 lat (msec): min=41, max=494, avg=76.82, stdev=53.38
00:12:57.818 clat percentiles (msec):
00:12:57.818 | 1.00th=[ 52], 5.00th=[ 55], 10.00th=[ 56], 20.00th=[ 57],
00:12:57.818 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 60], 60.00th=[ 62],
00:12:57.818 | 70.00th=[ 67], 80.00th=[ 79], 90.00th=[ 110], 95.00th=[ 192],
00:12:57.818 | 99.00th=[ 468], 99.50th=[ 481], 99.90th=[ 493], 99.95th=[ 493],
00:12:57.818 | 99.99th=[ 493]
00:12:57.818 bw ( KiB/s): min= 6400, max=14592, per=3.75%, avg=9600.00, stdev=2848.78, samples=10
00:12:57.818 iops : min= 50, max= 114, avg=75.00, stdev=22.26, samples=10
00:12:57.818 write: IOPS=67, BW=8601KiB/s (8807kB/s)(45.9MiB/5462msec); 0 zone resets
00:12:57.818 slat (usec): min=13, max=133, avg=38.52, stdev=17.47
00:12:57.818 clat (msec): min=222, max=1317, avg=871.64, stdev=165.79
00:12:57.818 lat (msec): min=222, max=1317, avg=871.68, stdev=165.79
00:12:57.818 clat percentiles (msec):
00:12:57.818 | 1.00th=[ 300], 5.00th=[ 523], 10.00th=[ 676], 20.00th=[ 818],
00:12:57.819 | 30.00th=[ 844], 40.00th=[ 860], 50.00th=[ 877], 60.00th=[ 894],
00:12:57.819 | 70.00th=[ 911], 80.00th=[ 986], 90.00th=[ 1053], 95.00th=[ 1083],
00:12:57.819 | 99.00th=[ 1284], 99.50th=[ 1318], 99.90th=[ 1318], 99.95th=[ 1318],
00:12:57.819 | 99.99th=[ 1318]
00:12:57.819 bw ( KiB/s): min= 3584, max= 8960, per=3.08%, avg=7859.20, stdev=1676.75, samples=10
00:12:57.819 iops : min= 28, max= 70, avg=61.40, stdev=13.10, samples=10
00:12:57.819 lat (msec) : 50=0.27%, 100=44.64%, 250=5.76%, 500=2.28%, 750=4.02%
00:12:57.819 lat (msec) : 1000=34.05%, 2000=8.98%
00:12:57.819 cpu : usr=0.37%, sys=0.26%, ctx=402, majf=0, minf=1
00:12:57.819 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.3%, >=64=91.6%
00:12:57.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:57.819 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:12:57.819 issued rwts: total=379,367,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:57.819 latency : target=0, window=0, percentile=100.00%, depth=64
00:12:57.819 job6: (groupid=0, jobs=1): err= 0: pid=73552: Wed Jul 24 19:51:26 2024
00:12:57.819 read: IOPS=66, BW=8493KiB/s (8697kB/s)(45.4MiB/5471msec)
00:12:57.819 slat (usec): min=8, max=698, avg=31.73, stdev=52.38
00:12:57.819 clat (msec): min=41, max=498, avg=74.86, stdev=48.09
00:12:57.819 lat (msec): min=41, max=498, avg=74.89, stdev=48.08
00:12:57.819 clat percentiles (msec):
00:12:57.819 | 1.00th=[ 44], 5.00th=[ 55], 10.00th=[ 57], 20.00th=[ 58],
00:12:57.819 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 63],
00:12:57.819 | 70.00th=[ 67], 80.00th=[ 78], 90.00th=[ 105], 95.00th=[ 163],
00:12:57.819 | 99.00th=[ 220], 99.50th=[ 485], 99.90th=[ 498], 99.95th=[ 498],
00:12:57.819 | 99.99th=[ 498]
00:12:57.819 bw ( KiB/s): min= 5632, max=14621, per=3.59%, avg=9196.70, stdev=2836.45, samples=10
00:12:57.819 iops : min= 44, max= 114, avg=71.60, stdev=22.11, samples=10
00:12:57.819 write: IOPS=67, BW=8586KiB/s (8792kB/s)(45.9MiB/5471msec); 0 zone resets
00:12:57.819 slat (usec): min=12, max=1754, avg=42.92, stdev=92.07
00:12:57.819 clat (msec): min=220, max=1363, avg=878.48, stdev=173.70
00:12:57.819 lat (msec): min=220, max=1363, avg=878.52, stdev=173.70
00:12:57.819 clat percentiles (msec):
00:12:57.819 | 1.00th=[ 262], 5.00th=[ 506], 10.00th=[ 651], 20.00th=[ 835],
00:12:57.819 | 30.00th=[ 852], 40.00th=[ 860], 50.00th=[ 877], 60.00th=[ 894],
00:12:57.819 | 70.00th=[ 927], 80.00th=[ 1020], 90.00th=[ 1062], 95.00th=[ 1099],
00:12:57.819 | 99.00th=[ 1284], 99.50th=[ 1301], 99.90th=[ 1368], 99.95th=[ 1368],
00:12:57.819 | 99.99th=[ 1368]
00:12:57.819 bw ( KiB/s): min= 3334, max= 8960, per=3.08%, avg=7862.70, stdev=1776.02, samples=10
00:12:57.819 iops : min= 26, max= 70, avg=61.20, stdev=13.76, samples=10
00:12:57.819 lat (msec) : 50=0.82%, 100=43.84%, 250=5.07%, 500=2.47%, 750=3.84%
00:12:57.819 lat (msec) : 1000=33.15%, 2000=10.82%
00:12:57.819 cpu : usr=0.29%, sys=0.31%, ctx=425, majf=0, minf=1
00:12:57.819 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.4%
00:12:57.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:57.819 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:12:57.819 issued rwts: total=363,367,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:57.819 latency : target=0, window=0, percentile=100.00%, depth=64
00:12:57.819 job7: (groupid=0, jobs=1): err= 0: pid=73553: Wed Jul 24 19:51:26 2024
00:12:57.819 read: IOPS=62, BW=7939KiB/s (8129kB/s)(42.6MiB/5498msec)
00:12:57.819 slat (usec): min=8, max=176, avg=28.90, stdev=18.53
00:12:57.819 clat (msec): min=3, max=518, avg=72.85, stdev=60.29
00:12:57.819 lat (msec): min=3, max=519, avg=72.88, stdev=60.29
00:12:57.819 clat percentiles (msec):
00:12:57.819 | 1.00th=[ 8], 5.00th=[ 15], 10.00th=[ 54], 20.00th=[ 57],
00:12:57.819 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 59], 60.00th=[ 61],
00:12:57.819 | 70.00th=[ 64], 80.00th=[ 72], 90.00th=[ 86], 95.00th=[ 199],
00:12:57.819 | 99.00th=[ 506], 99.50th=[ 518], 99.90th=[ 518], 99.95th=[ 518],
00:12:57.819 | 99.99th=[ 518]
00:12:57.819 bw ( KiB/s): min= 6144, max=15840, per=3.37%, avg=8622.70, stdev=2978.03, samples=10
00:12:57.819 iops : min= 48, max= 123, avg=67.20, stdev=23.14, samples=10
00:12:57.819 write: IOPS=66, BW=8567KiB/s (8773kB/s)(46.0MiB/5498msec); 0 zone resets
00:12:57.819 slat (usec): min=13, max=4053, avg=53.88, stdev=215.30
00:12:57.819 clat (msec): min=15, max=1353, avg=887.11, stdev=187.61
00:12:57.819 lat (msec): min=15, max=1353, avg=887.17, stdev=187.60
00:12:57.819 clat percentiles (msec):
00:12:57.819 | 1.00th=[ 58], 5.00th=[ 535], 10.00th=[ 642], 20.00th=[ 852],
00:12:57.819 | 30.00th=[ 869], 40.00th=[ 877], 50.00th=[ 894], 60.00th=[ 919],
00:12:57.819 | 70.00th=[ 961], 80.00th=[ 995], 90.00th=[ 1070], 95.00th=[ 1116],
00:12:57.819 | 99.00th=[ 1318], 99.50th=[ 1351], 99.90th=[ 1351], 99.95th=[ 1351],
00:12:57.819 | 99.99th=[ 1351]
00:12:57.819 bw ( KiB/s): min= 4087, max= 8960, per=3.09%, avg=7882.10, stdev=1507.76, samples=10
00:12:57.819 iops : min= 31, max= 70, avg=61.40, stdev=11.99, samples=10
00:12:57.819 lat (msec) : 4=0.14%, 10=1.41%, 20=1.41%, 50=0.99%, 100=40.62%
00:12:57.819 lat (msec) : 250=3.67%, 500=1.55%, 750=4.65%, 1000=35.83%, 2000=9.73%
00:12:57.819 cpu : usr=0.22%, sys=0.42%, ctx=395, majf=0, minf=1
00:12:57.819 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.5%, >=64=91.1%
00:12:57.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:57.819 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0%
00:12:57.819 issued rwts: total=341,368,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:57.819 latency : target=0, window=0, percentile=100.00%, depth=64
00:12:57.819 job8: (groupid=0, jobs=1): err= 0: pid=73554: Wed Jul 24 19:51:26 2024
00:12:57.819 read: IOPS=59, BW=7633KiB/s (7816kB/s)(40.8MiB/5467msec)
00:12:57.819 slat (usec): min=9, max=139, avg=27.22, stdev=15.47
00:12:57.819 clat (msec): min=36, max=509, avg=78.24, stdev=52.32
00:12:57.819 lat (msec): min=36, max=509, avg=78.26, stdev=52.32
00:12:57.819 clat percentiles (msec):
00:12:57.819 | 1.00th=[ 50], 5.00th=[ 55], 10.00th=[ 56], 20.00th=[ 57],
00:12:57.819 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 62],
00:12:57.819 | 70.00th=[ 67], 80.00th=[ 82], 90.00th=[ 133], 95.00th=[ 174],
00:12:57.819 | 99.00th=[ 234], 99.50th=[ 493], 99.90th=[ 510], 99.95th=[ 510],
00:12:57.819 | 99.99th=[ 510]
00:12:57.819 bw ( KiB/s): min= 5376, max=16160, per=3.22%, avg=8243.00, stdev=3159.79, samples=10
00:12:57.819 iops : min= 42, max= 126, avg=64.20, stdev=24.62, samples=10
00:12:57.819 write: IOPS=67, BW=8593KiB/s (8799kB/s)(45.9MiB/5467msec); 0 zone resets
00:12:57.819 slat (usec): min=13, max=134, avg=38.67, stdev=15.87
00:12:57.819 clat (msec): min=215, max=1360, avg=882.34, stdev=174.90
00:12:57.819 lat (msec): min=215, max=1360, avg=882.38, stdev=174.90
00:12:57.819 clat percentiles (msec):
00:12:57.819 | 1.00th=[ 271], 5.00th=[ 527], 10.00th=[ 651], 20.00th=[ 827],
00:12:57.819 | 30.00th=[ 860], 40.00th=[ 877], 50.00th=[ 894], 60.00th=[ 911],
00:12:57.819 | 70.00th=[ 944], 80.00th=[ 995], 90.00th=[ 1062], 95.00th=[ 1099],
00:12:57.819 | 99.00th=[ 1301], 99.50th=[ 1351], 99.90th=[ 1368], 99.95th=[ 1368],
00:12:57.819 | 99.99th=[ 1368]
00:12:57.819 bw ( KiB/s): min= 3334, max= 8960, per=3.08%, avg=7856.20, stdev=1769.94, samples=10
00:12:57.819 iops : min= 26, max= 70, avg=61.20, stdev=13.74, samples=10
00:12:57.819 lat (msec) : 50=0.58%, 100=39.39%, 250=7.07%, 500=2.16%, 750=5.63%
00:12:57.819 lat (msec) : 1000=34.63%, 2000=10.53%
00:12:57.819 cpu : usr=0.20%, sys=0.40%, ctx=402, majf=0, minf=1
00:12:57.819 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.6%, >=64=90.9%
00:12:57.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:57.819 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0%
00:12:57.819 issued rwts: total=326,367,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:57.819 latency : target=0, window=0, percentile=100.00%, depth=64
00:12:57.819 job9: (groupid=0, jobs=1): err= 0: pid=73587: Wed Jul 24 19:51:26 2024
00:12:57.819 read: IOPS=71, BW=9155KiB/s (9374kB/s)(49.0MiB/5481msec)
00:12:57.819 slat (usec): min=10, max=6281, avg=52.64, stdev=323.19
00:12:57.819 clat (msec): min=11, max=509, avg=84.71, stdev=71.99
00:12:57.819 lat (msec): min=11, max=509, avg=84.76, stdev=71.98
00:12:57.819 clat percentiles (msec):
00:12:57.819 | 1.00th=[ 15], 5.00th=[ 47], 10.00th=[ 55], 20.00th=[ 57],
00:12:57.819 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 63],
00:12:57.819 | 70.00th=[ 68], 80.00th=[ 80], 90.00th=[ 127], 95.00th=[ 255],
00:12:57.819 | 99.00th=[ 493], 99.50th=[ 510], 99.90th=[ 510], 99.95th=[ 510],
00:12:57.819 | 99.99th=[ 510]
00:12:57.819 bw ( KiB/s): min= 5888, max=22784, per=3.87%, avg=9904.40, stdev=4924.83, samples=10
00:12:57.819 iops : min= 46, max= 178, avg=77.20, stdev=38.60, samples=10
00:12:57.819 write: IOPS=66, BW=8524KiB/s (8729kB/s)(45.6MiB/5481msec); 0 zone resets
00:12:57.819 slat (usec): min=12, max=3372, avg=62.02, stdev=195.52
00:12:57.819 clat (msec): min=108, max=1303, avg=867.46, stdev=181.34
00:12:57.819 lat (msec): min=109, max=1303, avg=867.52, stdev=181.31
00:12:57.819 clat percentiles (msec):
00:12:57.819 | 1.00th=[ 251], 5.00th=[ 527], 10.00th=[ 642], 20.00th=[ 760],
00:12:57.819 | 30.00th=[ 852], 40.00th=[ 869], 50.00th=[ 877], 60.00th=[ 894],
00:12:57.819 | 70.00th=[ 936], 80.00th=[ 1011], 90.00th=[ 1062], 95.00th=[ 1116],
00:12:57.819 | 99.00th=[ 1267], 99.50th=[ 1301], 99.90th=[ 1301], 99.95th=[ 1301],
00:12:57.819 | 99.99th=[ 1301]
00:12:57.819 bw ( KiB/s): min= 3328, max= 8960, per=3.07%, avg=7830.00, stdev=1726.71, samples=10
00:12:57.819 iops : min= 26, max= 70, avg=61.00, stdev=13.40, samples=10
00:12:57.819 lat (msec) : 20=0.79%, 50=2.25%, 100=40.69%, 250=5.28%, 500=4.49%
00:12:57.819 lat (msec) : 750=7.66%, 1000=28.67%, 2000=10.17%
00:12:57.819 cpu : usr=0.15%, sys=0.49%, ctx=509, majf=0, minf=1
00:12:57.819 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.2%, >=64=91.7%
00:12:57.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:57.819 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:12:57.819 issued rwts: total=392,365,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:57.819 latency : target=0, window=0, percentile=100.00%, depth=64
00:12:57.819 job10: (groupid=0, jobs=1): err= 0: pid=73605: Wed Jul 24 19:51:26 2024
00:12:57.819 read: IOPS=72, BW=9294KiB/s (9517kB/s)(50.0MiB/5509msec)
00:12:57.819 slat (usec): min=9, max=346, avg=31.64, stdev=28.12
00:12:57.819 clat (usec): min=1859, max=538696, avg=69863.69, stdev=49694.19
00:12:57.819 lat (usec): min=1877, max=538723, avg=69895.33, stdev=49694.20
00:12:57.819 clat percentiles (msec):
00:12:57.820 | 1.00th=[ 4], 5.00th=[ 54], 10.00th=[ 55], 20.00th=[ 57],
00:12:57.820 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 59], 60.00th=[ 61],
00:12:57.820 | 70.00th=[ 64], 80.00th=[ 73], 90.00th=[ 86], 95.00th=[ 115],
00:12:57.820 | 99.00th=[ 249], 99.50th=[ 527], 99.90th=[ 542], 99.95th=[ 542],
00:12:57.820 | 99.99th=[ 542]
00:12:57.820 bw ( KiB/s): min= 4864, max=16606, per=3.97%, avg=10159.80, stdev=3337.17, samples=10
00:12:57.820 iops : min= 38, max= 129, avg=79.30, stdev=25.91, samples=10
00:12:57.820 write: IOPS=66, BW=8574KiB/s (8779kB/s)(46.1MiB/5509msec); 0 zone resets
00:12:57.820 slat (usec): min=14, max=3879, avg=59.62, stdev=206.30
00:12:57.820 clat (msec): min=15, max=1330, avg=877.35, stdev=182.34
00:12:57.820 lat (msec): min=18, max=1330, avg=877.41, stdev=182.29
00:12:57.820 clat percentiles (msec):
00:12:57.820 | 1.00th=[ 36], 5.00th=[ 527], 10.00th=[ 676], 20.00th=[ 835],
00:12:57.820 | 30.00th=[ 852], 40.00th=[ 869], 50.00th=[ 885], 60.00th=[ 911],
00:12:57.820 | 70.00th=[ 944], 80.00th=[ 1003], 90.00th=[ 1036], 95.00th=[ 1062],
00:12:57.820 | 99.00th=[ 1284], 99.50th=[ 1334], 99.90th=[ 1334], 99.95th=[ 1334],
00:12:57.820 | 99.99th=[ 1334]
00:12:57.820 bw ( KiB/s): min= 256, max= 8960, per=2.82%, avg=7190.45, stdev=2708.96, samples=11
00:12:57.820 iops : min= 2, max= 70, avg=56.09, stdev=21.27, samples=11
00:12:57.820 lat (msec) : 2=0.13%, 4=0.39%, 10=0.65%, 20=0.65%, 50=1.17%
00:12:57.820 lat (msec) : 100=45.77%, 250=3.38%, 500=1.56%, 750=4.68%, 1000=31.73%
00:12:57.820 lat (msec) : 2000=9.88%
00:12:57.820 cpu : usr=0.27%, sys=0.36%, ctx=466, majf=0, minf=1
00:12:57.820 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=4.2%, >=64=91.8%
00:12:57.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:57.820 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:12:57.820 issued rwts: total=400,369,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:57.820 latency : target=0, window=0, percentile=100.00%, depth=64
00:12:57.820 job11: (groupid=0, jobs=1): err= 0: pid=73659: Wed Jul 24 19:51:26 2024
00:12:57.820 read: IOPS=63, BW=8160KiB/s (8356kB/s)(43.8MiB/5490msec)
00:12:57.820 slat (nsec): min=8885, max=72781, avg=27528.63, stdev=13639.58
00:12:57.820 clat (msec): min=36, max=208, avg=75.47, stdev=36.22
00:12:57.820 lat (msec): min=36, max=208, avg=75.49, stdev=36.22
00:12:57.820 clat percentiles (msec):
00:12:57.820 | 1.00th=[ 43], 5.00th=[ 55], 10.00th=[ 55], 20.00th=[ 57],
00:12:57.820 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 60], 60.00th=[ 62],
00:12:57.820 | 70.00th=[ 66], 80.00th=[ 82], 90.00th=[ 144], 95.00th=[ 163],
00:12:57.820 | 99.00th=[ 203], 99.50th=[ 209], 99.90th=[ 209], 99.95th=[ 209],
00:12:57.820 | 99.99th=[ 209]
00:12:57.820 bw ( KiB/s): min= 4096, max=18725, per=3.50%, avg=8960.40, stdev=4062.00, samples=10
00:12:57.820 iops : min= 32, max= 146, avg=69.80, stdev=31.70, samples=10
00:12:57.820 write: IOPS=67, BW=8580KiB/s (8786kB/s)(46.0MiB/5490msec); 0 zone resets
00:12:57.820 slat (usec): min=13, max=10849, avg=71.87, stdev=563.92
00:12:57.820 clat (msec): min=226, max=1389, avg=879.51, stdev=174.30
00:12:57.820 lat (msec): min=237, max=1389, avg=879.59, stdev=174.19
00:12:57.820 clat percentiles (msec):
00:12:57.820 | 1.00th=[ 268], 5.00th=[ 527], 10.00th=[ 676], 20.00th=[ 793],
00:12:57.820 | 30.00th=[ 844], 40.00th=[ 869], 50.00th=[ 894], 60.00th=[ 919],
00:12:57.820 | 70.00th=[ 944], 80.00th=[ 1011], 90.00th=[ 1053], 95.00th=[ 1083],
00:12:57.820 | 99.00th=[ 1284], 99.50th=[ 1368], 99.90th=[ 1385], 99.95th=[ 1385],
00:12:57.820 | 99.99th=[ 1385]
00:12:57.820 bw ( KiB/s): min= 3078, max= 8960, per=3.06%, avg=7805.00, stdev=1799.42, samples=10
00:12:57.820 iops : min= 24, max= 70, avg=60.80, stdev=13.98, samples=10
00:12:57.820 lat (msec) : 50=0.84%, 100=40.53%, 250=7.80%, 500=1.67%, 750=6.27%
00:12:57.820 lat (msec) : 1000=32.17%, 2000=10.72%
00:12:57.820 cpu : usr=0.24%, sys=0.38%, ctx=431, majf=0, minf=1
00:12:57.820 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.5%, >=64=91.2%
00:12:57.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:57.820 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0%
00:12:57.820 issued rwts: total=350,368,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:57.820 latency : target=0, window=0, percentile=100.00%, depth=64
00:12:57.820 job12: (groupid=0, jobs=1): err= 0: pid=73679: Wed Jul 24 19:51:26 2024
00:12:57.820 read: IOPS=75, BW=9670KiB/s (9902kB/s)(51.6MiB/5467msec)
00:12:57.820 slat (usec): min=8, max=241, avg=25.05, stdev=20.00
00:12:57.820 clat (msec): min=42, max=521, avg=79.45, stdev=53.55
00:12:57.820 lat (msec): min=42, max=521, avg=79.48, stdev=53.55
00:12:57.820 clat percentiles (msec):
00:12:57.820 | 1.00th=[ 50], 5.00th=[ 55], 10.00th=[ 56], 20.00th=[ 58],
00:12:57.820 | 30.00th=[ 59], 40.00th=[ 60], 50.00th=[ 61], 60.00th=[ 64],
00:12:57.820 | 70.00th=[ 70], 80.00th=[ 80], 90.00th=[ 131], 95.00th=[ 199],
00:12:57.820 | 99.00th=[ 234], 99.50th=[ 493], 99.90th=[ 523], 99.95th=[ 523],
00:12:57.820 | 99.99th=[ 523]
00:12:57.820 bw ( KiB/s): min= 6387, max=18981, per=4.10%, avg=10496.20, stdev=3377.79, samples=10
00:12:57.820 iops : min= 49, max= 148, avg=81.80, stdev=26.42, samples=10
00:12:57.820 write: IOPS=66, BW=8569KiB/s (8775kB/s)(45.8MiB/5467msec); 0 zone resets
00:12:57.820 slat (usec): min=14, max=7408, avg=63.64, stdev=399.29
00:12:57.820 clat (msec): min=226, max=1342, avg=863.46, stdev=171.99
00:12:57.820 lat (msec): min=234, max=1342, avg=863.52, stdev=171.91
00:12:57.820 clat percentiles (msec):
00:12:57.820 | 1.00th=[ 296], 5.00th=[ 514], 10.00th=[ 659], 20.00th=[ 785],
00:12:57.820 | 30.00th=[ 852], 40.00th=[ 860], 50.00th=[ 869], 60.00th=[ 885],
00:12:57.820 | 70.00th=[ 902], 80.00th=[ 986], 90.00th=[ 1045], 95.00th=[ 1070],
00:12:57.820 | 99.00th=[ 1334], 99.50th=[ 1334], 99.90th=[ 1351], 99.95th=[ 1351],
00:12:57.820 | 99.99th=[ 1351]
00:12:57.820 bw ( KiB/s): min= 3334, max= 8960, per=3.07%, avg=7830.70, stdev=1762.60, samples=10
00:12:57.820 iops : min= 26, max= 70, avg=61.00, stdev=13.70, samples=10
00:12:57.820 lat (msec) : 50=0.77%, 100=45.44%, 250=6.68%, 500=2.18%, 750=6.55%
00:12:57.820 lat (msec) : 1000=30.17%, 2000=8.22%
00:12:57.820 cpu : usr=0.18%, sys=0.37%, ctx=424, majf=0, minf=1
00:12:57.820 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=4.1%, >=64=91.9%
00:12:57.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:57.820 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:12:57.820 issued rwts: total=413,366,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:57.820 latency : target=0, window=0, percentile=100.00%, depth=64
00:12:57.820 job13: (groupid=0, jobs=1): err= 0: pid=73705: Wed Jul 24 19:51:26 2024
00:12:57.820 read: IOPS=70, BW=9011KiB/s (9227kB/s)(48.2MiB/5483msec)
00:12:57.820 slat (usec): min=8, max=339, avg=28.25, stdev=29.99
00:12:57.820 clat (msec): min=42, max=521, avg=76.72, stdev=49.79
00:12:57.820 lat (msec): min=42, max=521, avg=76.75, stdev=49.79
00:12:57.820 clat percentiles (msec):
00:12:57.820 | 1.00th=[ 45], 5.00th=[ 55], 10.00th=[ 56], 20.00th=[ 58],
00:12:57.820 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 63],
00:12:57.820 | 70.00th=[ 67], 80.00th=[ 80], 90.00th=[ 126], 95.00th=[ 163],
00:12:57.820 | 99.00th=[ 232], 99.50th=[ 493], 99.90th=[ 523], 99.95th=[ 523],
00:12:57.820 | 99.99th=[ 523]
00:12:57.820 bw ( KiB/s): min= 4864, max=18212, per=3.83%, avg=9804.90, stdev=3657.04, samples=10
00:12:57.820 iops : min= 38, max= 142, avg=76.40, stdev=28.57, samples=10
00:12:57.820 write: IOPS=66, BW=8544KiB/s (8749kB/s)(45.8MiB/5483msec); 0 zone resets
00:12:57.820 slat (usec): min=13, max=2536, avg=45.69, stdev=133.53
00:12:57.820 clat (msec): min=235, max=1342, avg=876.25, stdev=169.77
00:12:57.820 lat (msec): min=235, max=1342, avg=876.29, stdev=169.77
00:12:57.820 clat percentiles (msec):
00:12:57.820 | 1.00th=[ 275], 5.00th=[ 535], 10.00th=[ 676], 20.00th=[ 818],
00:12:57.820 | 30.00th=[ 844], 40.00th=[ 869], 50.00th=[ 869], 60.00th=[ 885],
00:12:57.820 | 70.00th=[ 911], 80.00th=[ 1011], 90.00th=[ 1083], 95.00th=[ 1133],
00:12:57.820 | 99.00th=[ 1267], 99.50th=[ 1334], 99.90th=[ 1351], 99.95th=[ 1351],
00:12:57.820 | 99.99th=[ 1351]
00:12:57.820 bw ( KiB/s): min= 3078, max= 8960, per=3.07%, avg=7830.60, stdev=1811.08, samples=10
00:12:57.820 iops : min= 24, max= 70, avg=61.00, stdev=14.06, samples=10
00:12:57.820 lat (msec) : 50=0.80%, 100=43.88%, 250=6.52%, 500=1.73%, 750=5.05%
00:12:57.820 lat (msec) : 1000=31.78%, 2000=10.24%
00:12:57.820 cpu : usr=0.20%, sys=0.40%, ctx=437, majf=0, minf=1
00:12:57.820 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.3%, >=64=91.6%
00:12:57.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:57.820 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:12:57.820 issued rwts: total=386,366,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:57.820 latency : target=0, window=0, percentile=100.00%, depth=64
00:12:57.820 job14: (groupid=0, jobs=1): err= 0: pid=73706: Wed Jul 24 19:51:26 2024
00:12:57.820 read: IOPS=63, BW=8191KiB/s (8387kB/s)(43.6MiB/5454msec)
00:12:57.820 slat (usec): min=9, max=452, avg=31.33, stdev=43.18
00:12:57.820 clat (msec): min=40, max=460, avg=74.36, stdev=41.11
00:12:57.820 lat (msec): min=40, max=460, avg=74.39, stdev=41.10
00:12:57.820 clat percentiles (msec):
00:12:57.820 | 1.00th=[ 45], 5.00th=[ 55], 10.00th=[ 56], 20.00th=[ 57],
00:12:57.820 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 60], 60.00th=[ 62],
00:12:57.820 |
70.00th=[ 66], 80.00th=[ 77], 90.00th=[ 115], 95.00th=[ 184], 00:12:57.820 | 99.00th=[ 218], 99.50th=[ 226], 99.90th=[ 460], 99.95th=[ 460], 00:12:57.820 | 99.99th=[ 460] 00:12:57.820 bw ( KiB/s): min= 6656, max=14364, per=3.47%, avg=8884.20, stdev=2242.94, samples=10 00:12:57.820 iops : min= 52, max= 112, avg=69.30, stdev=17.47, samples=10 00:12:57.820 write: IOPS=67, BW=8660KiB/s (8868kB/s)(46.1MiB/5454msec); 0 zone resets 00:12:57.820 slat (usec): min=13, max=558, avg=40.64, stdev=51.88 00:12:57.820 clat (msec): min=218, max=1323, avg=874.08, stdev=169.45 00:12:57.820 lat (msec): min=218, max=1323, avg=874.12, stdev=169.45 00:12:57.820 clat percentiles (msec): 00:12:57.820 | 1.00th=[ 309], 5.00th=[ 502], 10.00th=[ 642], 20.00th=[ 818], 00:12:57.820 | 30.00th=[ 860], 40.00th=[ 869], 50.00th=[ 885], 60.00th=[ 902], 00:12:57.820 | 70.00th=[ 927], 80.00th=[ 995], 90.00th=[ 1053], 95.00th=[ 1083], 00:12:57.820 | 99.00th=[ 1267], 99.50th=[ 1318], 99.90th=[ 1318], 99.95th=[ 1318], 00:12:57.821 | 99.99th=[ 1318] 00:12:57.821 bw ( KiB/s): min= 3591, max= 8942, per=3.08%, avg=7858.10, stdev=1673.46, samples=10 00:12:57.821 iops : min= 28, max= 69, avg=61.30, stdev=13.03, samples=10 00:12:57.821 lat (msec) : 50=0.56%, 100=42.76%, 250=5.43%, 500=2.37%, 750=4.87% 00:12:57.821 lat (msec) : 1000=35.10%, 2000=8.91% 00:12:57.821 cpu : usr=0.17%, sys=0.35%, ctx=474, majf=0, minf=1 00:12:57.821 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.5%, >=64=91.2% 00:12:57.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.821 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:12:57.821 issued rwts: total=349,369,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:57.821 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:57.821 job15: (groupid=0, jobs=1): err= 0: pid=73707: Wed Jul 24 19:51:26 2024 00:12:57.821 read: IOPS=63, BW=8149KiB/s (8345kB/s)(43.5MiB/5466msec) 00:12:57.821 slat (usec): min=9, max=917, avg=30.49, 
stdev=60.31 00:12:57.821 clat (msec): min=46, max=504, avg=79.35, stdev=53.22 00:12:57.821 lat (msec): min=46, max=504, avg=79.39, stdev=53.22 00:12:57.821 clat percentiles (msec): 00:12:57.821 | 1.00th=[ 53], 5.00th=[ 55], 10.00th=[ 56], 20.00th=[ 58], 00:12:57.821 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 62], 00:12:57.821 | 70.00th=[ 69], 80.00th=[ 84], 90.00th=[ 128], 95.00th=[ 192], 00:12:57.821 | 99.00th=[ 213], 99.50th=[ 489], 99.90th=[ 506], 99.95th=[ 506], 00:12:57.821 | 99.99th=[ 506] 00:12:57.821 bw ( KiB/s): min= 6144, max=16128, per=3.45%, avg=8831.90, stdev=2944.12, samples=10 00:12:57.821 iops : min= 48, max= 126, avg=68.90, stdev=22.98, samples=10 00:12:57.821 write: IOPS=67, BW=8594KiB/s (8800kB/s)(45.9MiB/5466msec); 0 zone resets 00:12:57.821 slat (usec): min=12, max=470, avg=39.88, stdev=36.83 00:12:57.821 clat (msec): min=220, max=1340, avg=876.44, stdev=171.13 00:12:57.821 lat (msec): min=220, max=1340, avg=876.48, stdev=171.13 00:12:57.821 clat percentiles (msec): 00:12:57.821 | 1.00th=[ 292], 5.00th=[ 527], 10.00th=[ 642], 20.00th=[ 810], 00:12:57.821 | 30.00th=[ 852], 40.00th=[ 869], 50.00th=[ 894], 60.00th=[ 911], 00:12:57.821 | 70.00th=[ 944], 80.00th=[ 995], 90.00th=[ 1053], 95.00th=[ 1083], 00:12:57.821 | 99.00th=[ 1318], 99.50th=[ 1334], 99.90th=[ 1334], 99.95th=[ 1334], 00:12:57.821 | 99.99th=[ 1334] 00:12:57.821 bw ( KiB/s): min= 3328, max= 8977, per=3.08%, avg=7859.10, stdev=1798.52, samples=10 00:12:57.821 iops : min= 26, max= 70, avg=61.30, stdev=14.00, samples=10 00:12:57.821 lat (msec) : 50=0.14%, 100=41.68%, 250=6.85%, 500=1.82%, 750=5.73% 00:12:57.821 lat (msec) : 1000=34.55%, 2000=9.23% 00:12:57.821 cpu : usr=0.15%, sys=0.37%, ctx=476, majf=0, minf=1 00:12:57.821 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.5%, >=64=91.2% 00:12:57.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.821 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 
00:12:57.821 issued rwts: total=348,367,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:57.821 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:57.821 job16: (groupid=0, jobs=1): err= 0: pid=73708: Wed Jul 24 19:51:26 2024 00:12:57.821 read: IOPS=72, BW=9291KiB/s (9514kB/s)(49.8MiB/5483msec) 00:12:57.821 slat (usec): min=9, max=544, avg=29.11, stdev=30.64 00:12:57.821 clat (msec): min=40, max=509, avg=75.67, stdev=44.72 00:12:57.821 lat (msec): min=40, max=509, avg=75.70, stdev=44.72 00:12:57.821 clat percentiles (msec): 00:12:57.821 | 1.00th=[ 42], 5.00th=[ 54], 10.00th=[ 55], 20.00th=[ 57], 00:12:57.821 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 62], 00:12:57.821 | 70.00th=[ 68], 80.00th=[ 82], 90.00th=[ 122], 95.00th=[ 163], 00:12:57.821 | 99.00th=[ 218], 99.50th=[ 481], 99.90th=[ 510], 99.95th=[ 510], 00:12:57.821 | 99.99th=[ 510] 00:12:57.821 bw ( KiB/s): min= 6144, max=20480, per=3.95%, avg=10110.20, stdev=4153.62, samples=10 00:12:57.821 iops : min= 48, max= 160, avg=78.90, stdev=32.48, samples=10 00:12:57.821 write: IOPS=66, BW=8568KiB/s (8773kB/s)(45.9MiB/5483msec); 0 zone resets 00:12:57.821 slat (usec): min=14, max=121, avg=38.22, stdev=17.47 00:12:57.821 clat (msec): min=220, max=1374, avg=872.44, stdev=169.44 00:12:57.821 lat (msec): min=220, max=1374, avg=872.48, stdev=169.44 00:12:57.821 clat percentiles (msec): 00:12:57.821 | 1.00th=[ 266], 5.00th=[ 527], 10.00th=[ 667], 20.00th=[ 793], 00:12:57.821 | 30.00th=[ 852], 40.00th=[ 869], 50.00th=[ 885], 60.00th=[ 894], 00:12:57.821 | 70.00th=[ 919], 80.00th=[ 1003], 90.00th=[ 1062], 95.00th=[ 1099], 00:12:57.821 | 99.00th=[ 1284], 99.50th=[ 1351], 99.90th=[ 1368], 99.95th=[ 1368], 00:12:57.821 | 99.99th=[ 1368] 00:12:57.821 bw ( KiB/s): min= 3072, max= 8942, per=3.07%, avg=7831.80, stdev=1829.77, samples=10 00:12:57.821 iops : min= 24, max= 69, avg=61.10, stdev=14.24, samples=10 00:12:57.821 lat (msec) : 50=1.83%, 100=42.88%, 250=7.45%, 500=1.57%, 750=5.88% 00:12:57.821 lat (msec) : 
1000=30.85%, 2000=9.54% 00:12:57.821 cpu : usr=0.15%, sys=0.42%, ctx=428, majf=0, minf=1 00:12:57.821 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=4.2%, >=64=91.8% 00:12:57.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.821 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:57.821 issued rwts: total=398,367,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:57.821 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:57.821 job17: (groupid=0, jobs=1): err= 0: pid=73709: Wed Jul 24 19:51:26 2024 00:12:57.821 read: IOPS=62, BW=8015KiB/s (8207kB/s)(42.8MiB/5462msec) 00:12:57.821 slat (usec): min=9, max=1815, avg=33.41, stdev=98.40 00:12:57.821 clat (msec): min=30, max=498, avg=78.99, stdev=53.01 00:12:57.821 lat (msec): min=30, max=498, avg=79.03, stdev=53.01 00:12:57.821 clat percentiles (msec): 00:12:57.821 | 1.00th=[ 44], 5.00th=[ 54], 10.00th=[ 55], 20.00th=[ 57], 00:12:57.821 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 63], 00:12:57.821 | 70.00th=[ 67], 80.00th=[ 81], 90.00th=[ 133], 95.00th=[ 186], 00:12:57.821 | 99.00th=[ 215], 99.50th=[ 485], 99.90th=[ 498], 99.95th=[ 498], 00:12:57.821 | 99.99th=[ 498] 00:12:57.821 bw ( KiB/s): min= 2816, max=18688, per=3.39%, avg=8678.40, stdev=4419.99, samples=10 00:12:57.821 iops : min= 22, max= 146, avg=67.80, stdev=34.53, samples=10 00:12:57.821 write: IOPS=67, BW=8577KiB/s (8783kB/s)(45.8MiB/5462msec); 0 zone resets 00:12:57.821 slat (usec): min=14, max=321, avg=43.33, stdev=30.39 00:12:57.821 clat (msec): min=215, max=1391, avg=879.29, stdev=180.97 00:12:57.821 lat (msec): min=215, max=1391, avg=879.34, stdev=180.97 00:12:57.821 clat percentiles (msec): 00:12:57.821 | 1.00th=[ 264], 5.00th=[ 523], 10.00th=[ 651], 20.00th=[ 785], 00:12:57.821 | 30.00th=[ 844], 40.00th=[ 869], 50.00th=[ 885], 60.00th=[ 902], 00:12:57.821 | 70.00th=[ 927], 80.00th=[ 1011], 90.00th=[ 1083], 95.00th=[ 1116], 00:12:57.821 | 99.00th=[ 1351], 99.50th=[ 
1385], 99.90th=[ 1385], 99.95th=[ 1385], 00:12:57.821 | 99.99th=[ 1385] 00:12:57.821 bw ( KiB/s): min= 3328, max= 8960, per=3.07%, avg=7833.60, stdev=1790.78, samples=10 00:12:57.821 iops : min= 26, max= 70, avg=61.20, stdev=13.99, samples=10 00:12:57.821 lat (msec) : 50=1.13%, 100=39.97%, 250=7.20%, 500=2.40%, 750=5.79% 00:12:57.821 lat (msec) : 1000=32.34%, 2000=11.16% 00:12:57.821 cpu : usr=0.15%, sys=0.40%, ctx=440, majf=0, minf=1 00:12:57.821 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.5%, >=64=91.1% 00:12:57.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.821 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:12:57.821 issued rwts: total=342,366,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:57.821 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:57.821 job18: (groupid=0, jobs=1): err= 0: pid=73710: Wed Jul 24 19:51:26 2024 00:12:57.821 read: IOPS=65, BW=8377KiB/s (8578kB/s)(44.8MiB/5470msec) 00:12:57.821 slat (usec): min=9, max=15581, avg=68.04, stdev=822.29 00:12:57.821 clat (msec): min=13, max=509, avg=76.84, stdev=61.89 00:12:57.821 lat (msec): min=13, max=509, avg=76.91, stdev=61.85 00:12:57.821 clat percentiles (msec): 00:12:57.821 | 1.00th=[ 23], 5.00th=[ 54], 10.00th=[ 56], 20.00th=[ 58], 00:12:57.821 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 60], 60.00th=[ 62], 00:12:57.821 | 70.00th=[ 66], 80.00th=[ 74], 90.00th=[ 113], 95.00th=[ 163], 00:12:57.821 | 99.00th=[ 481], 99.50th=[ 493], 99.90th=[ 510], 99.95th=[ 510], 00:12:57.821 | 99.99th=[ 510] 00:12:57.821 bw ( KiB/s): min= 6144, max=16128, per=3.52%, avg=9007.50, stdev=2790.24, samples=10 00:12:57.821 iops : min= 48, max= 126, avg=70.20, stdev=21.80, samples=10 00:12:57.821 write: IOPS=65, BW=8448KiB/s (8650kB/s)(45.1MiB/5470msec); 0 zone resets 00:12:57.821 slat (usec): min=12, max=104, avg=32.60, stdev=11.96 00:12:57.821 clat (msec): min=227, max=1344, avg=889.27, stdev=163.27 00:12:57.821 lat (msec): min=227, 
max=1344, avg=889.30, stdev=163.27 00:12:57.821 clat percentiles (msec): 00:12:57.821 | 1.00th=[ 266], 5.00th=[ 567], 10.00th=[ 718], 20.00th=[ 852], 00:12:57.821 | 30.00th=[ 869], 40.00th=[ 877], 50.00th=[ 894], 60.00th=[ 902], 00:12:57.821 | 70.00th=[ 919], 80.00th=[ 1003], 90.00th=[ 1062], 95.00th=[ 1099], 00:12:57.821 | 99.00th=[ 1301], 99.50th=[ 1351], 99.90th=[ 1351], 99.95th=[ 1351], 00:12:57.821 | 99.99th=[ 1351] 00:12:57.821 bw ( KiB/s): min= 3072, max= 8960, per=3.05%, avg=7779.00, stdev=1829.63, samples=10 00:12:57.821 iops : min= 24, max= 70, avg=60.60, stdev=14.23, samples=10 00:12:57.821 lat (msec) : 20=0.28%, 50=1.67%, 100=42.00%, 250=5.42%, 500=1.95% 00:12:57.821 lat (msec) : 750=4.17%, 1000=33.94%, 2000=10.57% 00:12:57.821 cpu : usr=0.16%, sys=0.31%, ctx=420, majf=0, minf=1 00:12:57.821 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.5%, >=64=91.2% 00:12:57.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.821 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:12:57.821 issued rwts: total=358,361,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:57.821 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:57.821 job19: (groupid=0, jobs=1): err= 0: pid=73711: Wed Jul 24 19:51:26 2024 00:12:57.821 read: IOPS=74, BW=9554KiB/s (9783kB/s)(51.2MiB/5493msec) 00:12:57.821 slat (usec): min=7, max=450, avg=36.01, stdev=47.17 00:12:57.821 clat (msec): min=12, max=527, avg=75.44, stdev=53.18 00:12:57.821 lat (msec): min=12, max=527, avg=75.47, stdev=53.18 00:12:57.821 clat percentiles (msec): 00:12:57.821 | 1.00th=[ 23], 5.00th=[ 54], 10.00th=[ 55], 20.00th=[ 57], 00:12:57.821 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 59], 60.00th=[ 61], 00:12:57.821 | 70.00th=[ 64], 80.00th=[ 75], 90.00th=[ 121], 95.00th=[ 171], 00:12:57.822 | 99.00th=[ 253], 99.50th=[ 498], 99.90th=[ 527], 99.95th=[ 527], 00:12:57.822 | 99.99th=[ 527] 00:12:57.822 bw ( KiB/s): min= 5888, max=18944, per=4.05%, 
avg=10388.90, stdev=3651.35, samples=10 00:12:57.822 iops : min= 46, max= 148, avg=81.00, stdev=28.48, samples=10 00:12:57.822 write: IOPS=66, BW=8482KiB/s (8686kB/s)(45.5MiB/5493msec); 0 zone resets 00:12:57.822 slat (usec): min=15, max=15366, avg=101.04, stdev=807.49 00:12:57.822 clat (msec): min=238, max=1332, avg=876.50, stdev=168.27 00:12:57.822 lat (msec): min=251, max=1332, avg=876.60, stdev=168.13 00:12:57.822 clat percentiles (msec): 00:12:57.822 | 1.00th=[ 266], 5.00th=[ 527], 10.00th=[ 684], 20.00th=[ 810], 00:12:57.822 | 30.00th=[ 844], 40.00th=[ 860], 50.00th=[ 877], 60.00th=[ 902], 00:12:57.822 | 70.00th=[ 927], 80.00th=[ 1011], 90.00th=[ 1053], 95.00th=[ 1083], 00:12:57.822 | 99.00th=[ 1284], 99.50th=[ 1334], 99.90th=[ 1334], 99.95th=[ 1334], 00:12:57.822 | 99.99th=[ 1334] 00:12:57.822 bw ( KiB/s): min= 2816, max= 8960, per=3.05%, avg=7778.80, stdev=1876.16, samples=10 00:12:57.822 iops : min= 22, max= 70, avg=60.60, stdev=14.57, samples=10 00:12:57.822 lat (msec) : 20=0.39%, 50=1.55%, 100=44.44%, 250=5.94%, 500=2.33% 00:12:57.822 lat (msec) : 750=4.52%, 1000=31.01%, 2000=9.82% 00:12:57.822 cpu : usr=0.15%, sys=0.40%, ctx=528, majf=0, minf=1 00:12:57.822 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=4.1%, >=64=91.9% 00:12:57.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.822 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:57.822 issued rwts: total=410,364,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:57.822 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:57.822 job20: (groupid=0, jobs=1): err= 0: pid=73712: Wed Jul 24 19:51:26 2024 00:12:57.822 read: IOPS=65, BW=8400KiB/s (8602kB/s)(44.8MiB/5455msec) 00:12:57.822 slat (usec): min=9, max=625, avg=36.65, stdev=46.64 00:12:57.822 clat (msec): min=44, max=487, avg=74.88, stdev=48.23 00:12:57.822 lat (msec): min=44, max=487, avg=74.92, stdev=48.22 00:12:57.822 clat percentiles (msec): 00:12:57.822 | 1.00th=[ 52], 
5.00th=[ 55], 10.00th=[ 56], 20.00th=[ 57], 00:12:57.822 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 60], 60.00th=[ 61], 00:12:57.822 | 70.00th=[ 65], 80.00th=[ 72], 90.00th=[ 118], 95.00th=[ 167], 00:12:57.822 | 99.00th=[ 197], 99.50th=[ 472], 99.90th=[ 489], 99.95th=[ 489], 00:12:57.822 | 99.99th=[ 489] 00:12:57.822 bw ( KiB/s): min= 5120, max=14080, per=3.55%, avg=9086.00, stdev=2812.27, samples=10 00:12:57.822 iops : min= 40, max= 110, avg=70.90, stdev=21.95, samples=10 00:12:57.822 write: IOPS=67, BW=8635KiB/s (8842kB/s)(46.0MiB/5455msec); 0 zone resets 00:12:57.822 slat (usec): min=11, max=221, avg=46.63, stdev=29.14 00:12:57.822 clat (msec): min=214, max=1295, avg=874.36, stdev=168.68 00:12:57.822 lat (msec): min=214, max=1295, avg=874.40, stdev=168.68 00:12:57.822 clat percentiles (msec): 00:12:57.822 | 1.00th=[ 279], 5.00th=[ 502], 10.00th=[ 625], 20.00th=[ 827], 00:12:57.822 | 30.00th=[ 860], 40.00th=[ 877], 50.00th=[ 894], 60.00th=[ 911], 00:12:57.822 | 70.00th=[ 927], 80.00th=[ 995], 90.00th=[ 1053], 95.00th=[ 1083], 00:12:57.822 | 99.00th=[ 1267], 99.50th=[ 1284], 99.90th=[ 1301], 99.95th=[ 1301], 00:12:57.822 | 99.99th=[ 1301] 00:12:57.822 bw ( KiB/s): min= 3584, max= 8942, per=3.08%, avg=7857.40, stdev=1675.45, samples=10 00:12:57.822 iops : min= 28, max= 69, avg=61.30, stdev=13.03, samples=10 00:12:57.822 lat (msec) : 50=0.28%, 100=42.84%, 250=6.20%, 500=2.48%, 750=4.27% 00:12:57.822 lat (msec) : 1000=34.02%, 2000=9.92% 00:12:57.822 cpu : usr=0.26%, sys=0.33%, ctx=434, majf=0, minf=1 00:12:57.822 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.3% 00:12:57.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.822 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:12:57.822 issued rwts: total=358,368,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:57.822 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:57.822 job21: (groupid=0, jobs=1): err= 0: pid=73713: Wed Jul 24 
19:51:26 2024 00:12:57.822 read: IOPS=61, BW=7893KiB/s (8082kB/s)(42.0MiB/5449msec) 00:12:57.823 slat (usec): min=7, max=438, avg=32.08, stdev=43.20 00:12:57.823 clat (msec): min=42, max=500, avg=80.32, stdev=64.52 00:12:57.823 lat (msec): min=42, max=500, avg=80.35, stdev=64.51 00:12:57.823 clat percentiles (msec): 00:12:57.823 | 1.00th=[ 52], 5.00th=[ 55], 10.00th=[ 56], 20.00th=[ 57], 00:12:57.823 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 60], 60.00th=[ 61], 00:12:57.823 | 70.00th=[ 64], 80.00th=[ 75], 90.00th=[ 132], 95.00th=[ 184], 00:12:57.823 | 99.00th=[ 485], 99.50th=[ 502], 99.90th=[ 502], 99.95th=[ 502], 00:12:57.823 | 99.99th=[ 502] 00:12:57.823 bw ( KiB/s): min= 4352, max=13056, per=3.30%, avg=8448.00, stdev=3215.61, samples=10 00:12:57.823 iops : min= 34, max= 102, avg=66.00, stdev=25.12, samples=10 00:12:57.823 write: IOPS=66, BW=8527KiB/s (8732kB/s)(45.4MiB/5449msec); 0 zone resets 00:12:57.823 slat (usec): min=14, max=447, avg=44.22, stdev=47.33 00:12:57.823 clat (msec): min=223, max=1307, avg=884.82, stdev=161.54 00:12:57.823 lat (msec): min=223, max=1307, avg=884.87, stdev=161.54 00:12:57.823 clat percentiles (msec): 00:12:57.823 | 1.00th=[ 300], 5.00th=[ 527], 10.00th=[ 709], 20.00th=[ 827], 00:12:57.823 | 30.00th=[ 860], 40.00th=[ 885], 50.00th=[ 894], 60.00th=[ 911], 00:12:57.823 | 70.00th=[ 927], 80.00th=[ 1011], 90.00th=[ 1045], 95.00th=[ 1083], 00:12:57.823 | 99.00th=[ 1284], 99.50th=[ 1301], 99.90th=[ 1301], 99.95th=[ 1301], 00:12:57.823 | 99.99th=[ 1301] 00:12:57.823 bw ( KiB/s): min= 3328, max= 8960, per=3.07%, avg=7833.60, stdev=1790.78, samples=10 00:12:57.823 iops : min= 26, max= 70, avg=61.20, stdev=13.99, samples=10 00:12:57.823 lat (msec) : 50=0.43%, 100=40.92%, 250=6.29%, 500=2.29%, 750=4.15% 00:12:57.823 lat (msec) : 1000=33.76%, 2000=12.16% 00:12:57.823 cpu : usr=0.11%, sys=0.39%, ctx=458, majf=0, minf=1 00:12:57.823 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.6%, >=64=91.0% 00:12:57.823 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.823 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:12:57.823 issued rwts: total=336,363,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:57.823 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:57.823 job22: (groupid=0, jobs=1): err= 0: pid=73714: Wed Jul 24 19:51:26 2024 00:12:57.823 read: IOPS=73, BW=9443KiB/s (9670kB/s)(50.5MiB/5476msec) 00:12:57.823 slat (usec): min=9, max=363, avg=33.49, stdev=46.90 00:12:57.823 clat (msec): min=40, max=506, avg=73.03, stdev=46.15 00:12:57.823 lat (msec): min=40, max=506, avg=73.06, stdev=46.15 00:12:57.823 clat percentiles (msec): 00:12:57.823 | 1.00th=[ 43], 5.00th=[ 54], 10.00th=[ 56], 20.00th=[ 57], 00:12:57.823 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 60], 60.00th=[ 61], 00:12:57.823 | 70.00th=[ 65], 80.00th=[ 73], 90.00th=[ 103], 95.00th=[ 150], 00:12:57.823 | 99.00th=[ 215], 99.50th=[ 481], 99.90th=[ 506], 99.95th=[ 506], 00:12:57.823 | 99.99th=[ 506] 00:12:57.823 bw ( KiB/s): min= 6131, max=14877, per=3.99%, avg=10234.70, stdev=2592.27, samples=10 00:12:57.823 iops : min= 47, max= 116, avg=79.70, stdev=20.35, samples=10 00:12:57.823 write: IOPS=66, BW=8555KiB/s (8760kB/s)(45.8MiB/5476msec); 0 zone resets 00:12:57.823 slat (usec): min=11, max=9239, avg=69.66, stdev=483.39 00:12:57.823 clat (msec): min=219, max=1341, avg=873.66, stdev=166.39 00:12:57.823 lat (msec): min=228, max=1341, avg=873.73, stdev=166.30 00:12:57.823 clat percentiles (msec): 00:12:57.823 | 1.00th=[ 266], 5.00th=[ 518], 10.00th=[ 659], 20.00th=[ 835], 00:12:57.823 | 30.00th=[ 852], 40.00th=[ 869], 50.00th=[ 877], 60.00th=[ 894], 00:12:57.823 | 70.00th=[ 919], 80.00th=[ 1011], 90.00th=[ 1062], 95.00th=[ 1083], 00:12:57.823 | 99.00th=[ 1284], 99.50th=[ 1334], 99.90th=[ 1334], 99.95th=[ 1334], 00:12:57.823 | 99.99th=[ 1334] 00:12:57.823 bw ( KiB/s): min= 3078, max= 8960, per=3.07%, avg=7827.50, stdev=1842.89, samples=10 00:12:57.823 iops : min= 24, max= 70, 
avg=60.90, stdev=14.40, samples=10 00:12:57.823 lat (msec) : 50=1.30%, 100=45.84%, 250=5.32%, 500=1.95%, 750=4.16% 00:12:57.823 lat (msec) : 1000=31.69%, 2000=9.74% 00:12:57.823 cpu : usr=0.13%, sys=0.38%, ctx=570, majf=0, minf=1 00:12:57.823 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=4.2%, >=64=91.8% 00:12:57.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.823 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:57.823 issued rwts: total=404,366,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:57.823 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:57.823 job23: (groupid=0, jobs=1): err= 0: pid=73715: Wed Jul 24 19:51:26 2024 00:12:57.823 read: IOPS=61, BW=7895KiB/s (8084kB/s)(42.2MiB/5480msec) 00:12:57.823 slat (nsec): min=7856, max=71552, avg=21402.85, stdev=10189.16 00:12:57.823 clat (msec): min=32, max=516, avg=76.97, stdev=52.51 00:12:57.823 lat (msec): min=32, max=516, avg=76.99, stdev=52.50 00:12:57.823 clat percentiles (msec): 00:12:57.823 | 1.00th=[ 43], 5.00th=[ 54], 10.00th=[ 56], 20.00th=[ 57], 00:12:57.823 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 60], 60.00th=[ 62], 00:12:57.823 | 70.00th=[ 68], 80.00th=[ 82], 90.00th=[ 121], 95.00th=[ 169], 00:12:57.823 | 99.00th=[ 239], 99.50th=[ 502], 99.90th=[ 518], 99.95th=[ 518], 00:12:57.823 | 99.99th=[ 518] 00:12:57.824 bw ( KiB/s): min= 4096, max=17408, per=3.35%, avg=8572.00, stdev=3678.91, samples=10 00:12:57.824 iops : min= 32, max= 136, avg=66.80, stdev=28.67, samples=10 00:12:57.824 write: IOPS=66, BW=8526KiB/s (8730kB/s)(45.6MiB/5480msec); 0 zone resets 00:12:57.824 slat (nsec): min=11520, max=82188, avg=31574.52, stdev=11568.47 00:12:57.824 clat (msec): min=236, max=1367, avg=888.11, stdev=173.86 00:12:57.824 lat (msec): min=236, max=1367, avg=888.14, stdev=173.86 00:12:57.824 clat percentiles (msec): 00:12:57.824 | 1.00th=[ 271], 5.00th=[ 542], 10.00th=[ 684], 20.00th=[ 844], 00:12:57.824 | 30.00th=[ 869], 40.00th=[ 
877], 50.00th=[ 885], 60.00th=[ 911], 00:12:57.824 | 70.00th=[ 927], 80.00th=[ 995], 90.00th=[ 1083], 95.00th=[ 1133], 00:12:57.824 | 99.00th=[ 1334], 99.50th=[ 1351], 99.90th=[ 1368], 99.95th=[ 1368], 00:12:57.824 | 99.99th=[ 1368] 00:12:57.824 bw ( KiB/s): min= 3072, max= 8960, per=3.06%, avg=7804.50, stdev=1801.21, samples=10 00:12:57.824 iops : min= 24, max= 70, avg=60.80, stdev=13.98, samples=10 00:12:57.824 lat (msec) : 50=1.71%, 100=40.40%, 250=5.83%, 500=1.71%, 750=5.97% 00:12:57.824 lat (msec) : 1000=34.42%, 2000=9.96% 00:12:57.824 cpu : usr=0.18%, sys=0.26%, ctx=416, majf=0, minf=1 00:12:57.824 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.6%, >=64=91.0% 00:12:57.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.824 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:12:57.824 issued rwts: total=338,365,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:57.824 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:57.824 job24: (groupid=0, jobs=1): err= 0: pid=73716: Wed Jul 24 19:51:26 2024 00:12:57.824 read: IOPS=71, BW=9191KiB/s (9412kB/s)(49.1MiB/5473msec) 00:12:57.824 slat (usec): min=9, max=9534, avg=50.57, stdev=479.83 00:12:57.824 clat (msec): min=43, max=509, avg=77.53, stdev=55.20 00:12:57.824 lat (msec): min=43, max=509, avg=77.58, stdev=55.20 00:12:57.824 clat percentiles (msec): 00:12:57.824 | 1.00th=[ 50], 5.00th=[ 55], 10.00th=[ 56], 20.00th=[ 58], 00:12:57.824 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 62], 00:12:57.824 | 70.00th=[ 66], 80.00th=[ 78], 90.00th=[ 122], 95.00th=[ 174], 00:12:57.824 | 99.00th=[ 481], 99.50th=[ 510], 99.90th=[ 510], 99.95th=[ 510], 00:12:57.824 | 99.99th=[ 510] 00:12:57.824 bw ( KiB/s): min= 6912, max=15360, per=3.89%, avg=9956.20, stdev=2410.88, samples=10 00:12:57.824 iops : min= 54, max= 120, avg=77.70, stdev=18.80, samples=10 00:12:57.824 write: IOPS=66, BW=8513KiB/s (8717kB/s)(45.5MiB/5473msec); 0 zone resets 00:12:57.824 
slat (nsec): min=12770, max=86821, avg=35896.09, stdev=14157.23 00:12:57.824 clat (msec): min=228, max=1330, avg=875.32, stdev=166.81 00:12:57.824 lat (msec): min=228, max=1330, avg=875.35, stdev=166.81 00:12:57.824 clat percentiles (msec): 00:12:57.824 | 1.00th=[ 292], 5.00th=[ 542], 10.00th=[ 676], 20.00th=[ 827], 00:12:57.824 | 30.00th=[ 852], 40.00th=[ 860], 50.00th=[ 877], 60.00th=[ 894], 00:12:57.824 | 70.00th=[ 911], 80.00th=[ 1003], 90.00th=[ 1062], 95.00th=[ 1099], 00:12:57.824 | 99.00th=[ 1301], 99.50th=[ 1334], 99.90th=[ 1334], 99.95th=[ 1334], 00:12:57.824 | 99.99th=[ 1334] 00:12:57.824 bw ( KiB/s): min= 3072, max= 8942, per=3.06%, avg=7806.20, stdev=1801.87, samples=10 00:12:57.824 iops : min= 24, max= 69, avg=60.90, stdev=14.02, samples=10 00:12:57.824 lat (msec) : 50=0.53%, 100=45.31%, 250=5.81%, 500=1.72%, 750=4.76% 00:12:57.824 lat (msec) : 1000=32.23%, 2000=9.64% 00:12:57.824 cpu : usr=0.22%, sys=0.37%, ctx=399, majf=0, minf=1 00:12:57.824 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.2%, >=64=91.7% 00:12:57.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.824 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:57.824 issued rwts: total=393,364,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:57.824 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:57.824 job25: (groupid=0, jobs=1): err= 0: pid=73717: Wed Jul 24 19:51:26 2024 00:12:57.824 read: IOPS=69, BW=8853KiB/s (9066kB/s)(47.2MiB/5465msec) 00:12:57.824 slat (usec): min=9, max=730, avg=27.72, stdev=38.93 00:12:57.824 clat (msec): min=40, max=504, avg=76.09, stdev=50.84 00:12:57.824 lat (msec): min=40, max=504, avg=76.11, stdev=50.84 00:12:57.824 clat percentiles (msec): 00:12:57.824 | 1.00th=[ 44], 5.00th=[ 55], 10.00th=[ 56], 20.00th=[ 58], 00:12:57.824 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 63], 00:12:57.824 | 70.00th=[ 66], 80.00th=[ 77], 90.00th=[ 118], 95.00th=[ 190], 00:12:57.824 | 
99.00th=[ 243], 99.50th=[ 477], 99.90th=[ 506], 99.95th=[ 506], 00:12:57.824 | 99.99th=[ 506] 00:12:57.824 bw ( KiB/s): min= 5888, max=15872, per=3.75%, avg=9598.20, stdev=2653.57, samples=10 00:12:57.824 iops : min= 46, max= 124, avg=74.90, stdev=20.77, samples=10 00:12:57.824 write: IOPS=67, BW=8596KiB/s (8802kB/s)(45.9MiB/5465msec); 0 zone resets 00:12:57.824 slat (usec): min=13, max=144, avg=38.47, stdev=17.53 00:12:57.824 clat (msec): min=219, max=1340, avg=873.04, stdev=167.06 00:12:57.824 lat (msec): min=219, max=1340, avg=873.08, stdev=167.06 00:12:57.824 clat percentiles (msec): 00:12:57.824 | 1.00th=[ 292], 5.00th=[ 531], 10.00th=[ 651], 20.00th=[ 818], 00:12:57.824 | 30.00th=[ 852], 40.00th=[ 869], 50.00th=[ 877], 60.00th=[ 885], 00:12:57.824 | 70.00th=[ 911], 80.00th=[ 1003], 90.00th=[ 1062], 95.00th=[ 1099], 00:12:57.824 | 99.00th=[ 1301], 99.50th=[ 1334], 99.90th=[ 1334], 99.95th=[ 1334], 00:12:57.824 | 99.99th=[ 1334] 00:12:57.824 bw ( KiB/s): min= 3328, max= 8960, per=3.08%, avg=7857.40, stdev=1801.40, samples=10 00:12:57.824 iops : min= 26, max= 70, avg=61.30, stdev=14.03, samples=10 00:12:57.824 lat (msec) : 50=1.34%, 100=44.16%, 250=5.23%, 500=1.74%, 750=4.16% 00:12:57.824 lat (msec) : 1000=33.56%, 2000=9.80% 00:12:57.824 cpu : usr=0.18%, sys=0.42%, ctx=440, majf=0, minf=1 00:12:57.824 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.3%, >=64=91.5% 00:12:57.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.824 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:57.824 issued rwts: total=378,367,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:57.824 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:57.824 job26: (groupid=0, jobs=1): err= 0: pid=73718: Wed Jul 24 19:51:26 2024 00:12:57.824 read: IOPS=60, BW=7770KiB/s (7956kB/s)(41.6MiB/5486msec) 00:12:57.824 slat (usec): min=8, max=504, avg=26.12, stdev=33.64 00:12:57.824 clat (msec): min=5, max=528, avg=75.56, 
stdev=70.71 00:12:57.824 lat (msec): min=5, max=528, avg=75.59, stdev=70.70 00:12:57.824 clat percentiles (msec): 00:12:57.824 | 1.00th=[ 8], 5.00th=[ 14], 10.00th=[ 45], 20.00th=[ 56], 00:12:57.824 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 59], 60.00th=[ 61], 00:12:57.824 | 70.00th=[ 64], 80.00th=[ 72], 90.00th=[ 95], 95.00th=[ 243], 00:12:57.824 | 99.00th=[ 514], 99.50th=[ 531], 99.90th=[ 531], 99.95th=[ 531], 00:12:57.824 | 99.99th=[ 531] 00:12:57.824 bw ( KiB/s): min= 5620, max=17152, per=3.28%, avg=8395.60, stdev=3232.11, samples=10 00:12:57.824 iops : min= 43, max= 134, avg=65.50, stdev=25.34, samples=10 00:12:57.824 write: IOPS=66, BW=8493KiB/s (8697kB/s)(45.5MiB/5486msec); 0 zone resets 00:12:57.824 slat (usec): min=14, max=4012, avg=46.81, stdev=209.36 00:12:57.824 clat (msec): min=31, max=1348, avg=893.04, stdev=175.50 00:12:57.824 lat (msec): min=35, max=1349, avg=893.09, stdev=175.44 00:12:57.824 clat percentiles (msec): 00:12:57.824 | 1.00th=[ 262], 5.00th=[ 542], 10.00th=[ 684], 20.00th=[ 852], 00:12:57.824 | 30.00th=[ 869], 40.00th=[ 885], 50.00th=[ 894], 60.00th=[ 911], 00:12:57.824 | 70.00th=[ 944], 80.00th=[ 1045], 90.00th=[ 1083], 95.00th=[ 1099], 00:12:57.824 | 99.00th=[ 1301], 99.50th=[ 1334], 99.90th=[ 1351], 99.95th=[ 1351], 00:12:57.824 | 99.99th=[ 1351] 00:12:57.824 bw ( KiB/s): min= 3328, max= 8960, per=3.07%, avg=7831.80, stdev=1727.71, samples=10 00:12:57.824 iops : min= 26, max= 70, avg=61.10, stdev=13.45, samples=10 00:12:57.824 lat (msec) : 10=1.29%, 20=2.30%, 50=1.72%, 100=38.02%, 250=3.16% 00:12:57.824 lat (msec) : 500=2.73%, 750=4.88%, 1000=34.00%, 2000=11.91% 00:12:57.824 cpu : usr=0.13%, sys=0.33%, ctx=462, majf=0, minf=1 00:12:57.824 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.6%, >=64=91.0% 00:12:57.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.824 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:12:57.824 issued rwts: total=333,364,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:12:57.824 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:57.824 job27: (groupid=0, jobs=1): err= 0: pid=73719: Wed Jul 24 19:51:26 2024 00:12:57.824 read: IOPS=65, BW=8420KiB/s (8622kB/s)(45.1MiB/5488msec) 00:12:57.824 slat (usec): min=7, max=243, avg=23.99, stdev=18.57 00:12:57.824 clat (msec): min=21, max=523, avg=76.91, stdev=44.20 00:12:57.824 lat (msec): min=21, max=523, avg=76.93, stdev=44.20 00:12:57.824 clat percentiles (msec): 00:12:57.824 | 1.00th=[ 34], 5.00th=[ 54], 10.00th=[ 55], 20.00th=[ 57], 00:12:57.824 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 62], 00:12:57.824 | 70.00th=[ 68], 80.00th=[ 82], 90.00th=[ 129], 95.00th=[ 180], 00:12:57.824 | 99.00th=[ 226], 99.50th=[ 259], 99.90th=[ 523], 99.95th=[ 523], 00:12:57.824 | 99.99th=[ 523] 00:12:57.824 bw ( KiB/s): min= 5620, max=18944, per=3.60%, avg=9213.30, stdev=3920.00, samples=10 00:12:57.824 iops : min= 43, max= 148, avg=71.80, stdev=30.77, samples=10 00:12:57.824 write: IOPS=66, BW=8560KiB/s (8765kB/s)(45.9MiB/5488msec); 0 zone resets 00:12:57.824 slat (usec): min=10, max=112, avg=32.94, stdev=14.02 00:12:57.824 clat (msec): min=237, max=1388, avg=879.80, stdev=172.86 00:12:57.824 lat (msec): min=237, max=1388, avg=879.83, stdev=172.86 00:12:57.824 clat percentiles (msec): 00:12:57.824 | 1.00th=[ 292], 5.00th=[ 550], 10.00th=[ 659], 20.00th=[ 793], 00:12:57.824 | 30.00th=[ 852], 40.00th=[ 869], 50.00th=[ 877], 60.00th=[ 885], 00:12:57.824 | 70.00th=[ 936], 80.00th=[ 1011], 90.00th=[ 1062], 95.00th=[ 1116], 00:12:57.824 | 99.00th=[ 1368], 99.50th=[ 1385], 99.90th=[ 1385], 99.95th=[ 1385], 00:12:57.824 | 99.99th=[ 1385] 00:12:57.824 bw ( KiB/s): min= 2816, max= 8942, per=3.06%, avg=7804.40, stdev=1883.88, samples=10 00:12:57.824 iops : min= 22, max= 69, avg=60.80, stdev=14.62, samples=10 00:12:57.824 lat (msec) : 50=1.65%, 100=40.52%, 250=7.14%, 500=1.79%, 750=5.22% 00:12:57.824 lat (msec) : 1000=33.24%, 2000=10.44% 00:12:57.824 cpu : 
usr=0.26%, sys=0.24%, ctx=450, majf=0, minf=1 00:12:57.824 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.3% 00:12:57.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.825 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:12:57.825 issued rwts: total=361,367,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:57.825 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:57.825 job28: (groupid=0, jobs=1): err= 0: pid=73720: Wed Jul 24 19:51:26 2024 00:12:57.825 read: IOPS=76, BW=9740KiB/s (9974kB/s)(52.1MiB/5480msec) 00:12:57.825 slat (usec): min=7, max=116, avg=26.44, stdev=15.22 00:12:57.825 clat (msec): min=39, max=519, avg=77.03, stdev=47.59 00:12:57.825 lat (msec): min=39, max=519, avg=77.05, stdev=47.58 00:12:57.825 clat percentiles (msec): 00:12:57.825 | 1.00th=[ 45], 5.00th=[ 54], 10.00th=[ 56], 20.00th=[ 57], 00:12:57.825 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 63], 00:12:57.825 | 70.00th=[ 69], 80.00th=[ 81], 90.00th=[ 128], 95.00th=[ 178], 00:12:57.825 | 99.00th=[ 253], 99.50th=[ 257], 99.90th=[ 518], 99.95th=[ 518], 00:12:57.825 | 99.99th=[ 518] 00:12:57.825 bw ( KiB/s): min= 7409, max=19712, per=4.14%, avg=10616.70, stdev=3624.00, samples=10 00:12:57.825 iops : min= 57, max= 154, avg=82.70, stdev=28.47, samples=10 00:12:57.825 write: IOPS=66, BW=8572KiB/s (8778kB/s)(45.9MiB/5480msec); 0 zone resets 00:12:57.825 slat (usec): min=10, max=2248, avg=44.17, stdev=116.73 00:12:57.825 clat (msec): min=235, max=1369, avg=866.29, stdev=170.44 00:12:57.825 lat (msec): min=235, max=1369, avg=866.33, stdev=170.43 00:12:57.825 clat percentiles (msec): 00:12:57.825 | 1.00th=[ 296], 5.00th=[ 535], 10.00th=[ 659], 20.00th=[ 802], 00:12:57.825 | 30.00th=[ 844], 40.00th=[ 860], 50.00th=[ 869], 60.00th=[ 885], 00:12:57.825 | 70.00th=[ 911], 80.00th=[ 1003], 90.00th=[ 1062], 95.00th=[ 1083], 00:12:57.825 | 99.00th=[ 1334], 99.50th=[ 1368], 99.90th=[ 1368], 99.95th=[ 1368], 
00:12:57.825 | 99.99th=[ 1368] 00:12:57.825 bw ( KiB/s): min= 3072, max= 8960, per=3.07%, avg=7826.90, stdev=1848.55, samples=10 00:12:57.825 iops : min= 24, max= 70, avg=60.90, stdev=14.43, samples=10 00:12:57.825 lat (msec) : 50=1.40%, 100=45.03%, 250=6.38%, 500=1.79%, 750=6.63% 00:12:57.825 lat (msec) : 1000=29.46%, 2000=9.31% 00:12:57.825 cpu : usr=0.22%, sys=0.42%, ctx=444, majf=0, minf=1 00:12:57.825 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.1%, >=64=92.0% 00:12:57.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.825 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:57.825 issued rwts: total=417,367,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:57.825 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:57.825 job29: (groupid=0, jobs=1): err= 0: pid=73721: Wed Jul 24 19:51:26 2024 00:12:57.825 read: IOPS=73, BW=9444KiB/s (9671kB/s)(50.4MiB/5462msec) 00:12:57.825 slat (usec): min=8, max=288, avg=23.73, stdev=18.56 00:12:57.825 clat (msec): min=36, max=510, avg=80.12, stdev=57.95 00:12:57.825 lat (msec): min=36, max=510, avg=80.14, stdev=57.95 00:12:57.825 clat percentiles (msec): 00:12:57.825 | 1.00th=[ 43], 5.00th=[ 54], 10.00th=[ 56], 20.00th=[ 57], 00:12:57.825 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 59], 60.00th=[ 61], 00:12:57.825 | 70.00th=[ 65], 80.00th=[ 87], 90.00th=[ 134], 95.00th=[ 184], 00:12:57.825 | 99.00th=[ 251], 99.50th=[ 498], 99.90th=[ 510], 99.95th=[ 510], 00:12:57.825 | 99.99th=[ 510] 00:12:57.825 bw ( KiB/s): min= 3840, max=23040, per=3.99%, avg=10212.20, stdev=5124.53, samples=10 00:12:57.825 iops : min= 30, max= 180, avg=79.70, stdev=40.02, samples=10 00:12:57.825 write: IOPS=66, BW=8554KiB/s (8759kB/s)(45.6MiB/5462msec); 0 zone resets 00:12:57.825 slat (usec): min=13, max=142, avg=33.82, stdev=16.85 00:12:57.825 clat (msec): min=220, max=1361, avg=867.67, stdev=174.76 00:12:57.825 lat (msec): min=220, max=1361, avg=867.70, stdev=174.77 00:12:57.825 
clat percentiles (msec): 00:12:57.825 | 1.00th=[ 300], 5.00th=[ 527], 10.00th=[ 651], 20.00th=[ 751], 00:12:57.825 | 30.00th=[ 844], 40.00th=[ 860], 50.00th=[ 885], 60.00th=[ 902], 00:12:57.825 | 70.00th=[ 927], 80.00th=[ 1003], 90.00th=[ 1053], 95.00th=[ 1099], 00:12:57.825 | 99.00th=[ 1301], 99.50th=[ 1351], 99.90th=[ 1368], 99.95th=[ 1368], 00:12:57.825 | 99.99th=[ 1368] 00:12:57.825 bw ( KiB/s): min= 3328, max= 8960, per=3.07%, avg=7831.80, stdev=1748.66, samples=10 00:12:57.825 iops : min= 26, max= 70, avg=61.10, stdev=13.62, samples=10 00:12:57.825 lat (msec) : 50=1.69%, 100=42.71%, 250=7.55%, 500=2.34%, 750=7.94% 00:12:57.825 lat (msec) : 1000=28.12%, 2000=9.64% 00:12:57.825 cpu : usr=0.18%, sys=0.31%, ctx=465, majf=0, minf=1 00:12:57.825 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=4.2%, >=64=91.8% 00:12:57.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.825 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:57.825 issued rwts: total=403,365,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:57.825 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:57.825 00:12:57.825 Run status group 0 (all jobs): 00:12:57.825 READ: bw=250MiB/s (262MB/s), 7591KiB/s-9740KiB/s (7773kB/s-9974kB/s), io=1378MiB (1445MB), run=5449-5509msec 00:12:57.825 WRITE: bw=249MiB/s (261MB/s), 8448KiB/s-8660KiB/s (8650kB/s-8868kB/s), io=1373MiB (1440MB), run=5449-5509msec 00:12:57.825 00:12:57.825 Disk stats (read/write): 00:12:57.825 sda: ios=386/336, merge=0/0, ticks=23928/293041, in_queue=316970, util=88.60% 00:12:57.825 sdb: ios=373/336, merge=0/0, ticks=24280/293217, in_queue=317498, util=89.95% 00:12:57.825 sdc: ios=433/335, merge=0/0, ticks=27790/289359, in_queue=317149, util=90.40% 00:12:57.825 sdd: ios=397/336, merge=0/0, ticks=25348/291700, in_queue=317049, util=90.36% 00:12:57.825 sde: ios=402/335, merge=0/0, ticks=26746/288684, in_queue=315431, util=90.87% 00:12:57.825 sdf: ios=427/335, merge=0/0, 
ticks=27492/288248, in_queue=315740, util=90.55% 00:12:57.825 sdg: ios=400/336, merge=0/0, ticks=25943/291055, in_queue=316998, util=90.94% 00:12:57.825 sdh: ios=371/340, merge=0/0, ticks=23065/296114, in_queue=319179, util=92.08% 00:12:57.825 sdi: ios=349/335, merge=0/0, ticks=24240/291953, in_queue=316194, util=91.13% 00:12:57.825 sdj: ios=392/336, merge=0/0, ticks=30943/286269, in_queue=317212, util=91.43% 00:12:57.825 sdk: ios=400/340, merge=0/0, ticks=26479/292718, in_queue=319198, util=92.06% 00:12:57.825 sdl: ios=350/336, merge=0/0, ticks=26388/291009, in_queue=317397, util=92.08% 00:12:57.825 sdm: ios=413/336, merge=0/0, ticks=31379/285002, in_queue=316381, util=91.78% 00:12:57.825 sdn: ios=386/336, merge=0/0, ticks=28254/289793, in_queue=318048, util=92.82% 00:12:57.825 sdo: ios=349/335, merge=0/0, ticks=25498/289933, in_queue=315432, util=92.27% [2024-07-24 19:51:26.005499] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:57.825 sdp: ios=348/335, merge=0/0, ticks=26276/289698, in_queue=315975, util=93.11% 00:12:57.825 sdq: ios=398/336, merge=0/0, ticks=29215/288309, in_queue=317524, util=93.68% 00:12:57.825 sdr: ios=342/335, merge=0/0, ticks=25698/291311, in_queue=317009, util=93.79% 00:12:57.825 sds: ios=358/336, merge=0/0, ticks=24871/293166, in_queue=318037, util=94.46% 00:12:57.825 sdt: ios=410/336, merge=0/0, ticks=29520/289363, in_queue=318884, util=94.99% 00:12:57.825 sdu: ios=358/335, merge=0/0, ticks=25511/290950, in_queue=316461, util=94.44% 00:12:57.825 sdv: ios=336/335, merge=0/0, ticks=24394/291605, in_queue=315999, util=94.98% 00:12:57.825 sdw: ios=404/336, merge=0/0, ticks=28134/289098, in_queue=317233, util=95.19% 00:12:57.825 sdx: ios=338/336, merge=0/0, ticks=24670/293861, in_queue=318532, util=95.87% 00:12:57.825 sdy: ios=393/335, merge=0/0, ticks=28686/288489, in_queue=317175, util=95.65% 00:12:57.825 sdz: ios=378/335, merge=0/0, ticks=27444/288555, in_queue=315999, util=95.75% 
00:12:57.825 sdaa: ios=333/338, merge=0/0, ticks=22837/296502, in_queue=319339, util=96.83% 00:12:57.825 sdab: ios=361/336, merge=0/0, ticks=27282/290543, in_queue=317825, util=96.55% 00:12:57.825 sdac: ios=417/335, merge=0/0, ticks=31188/286006, in_queue=317194, util=96.71% 00:12:57.825 sdad: ios=403/335, merge=0/0, ticks=30486/286104, in_queue=316591, util=97.04% 00:12:57.825 [2024-07-24 19:51:26.008863] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:57.825 [2024-07-24 19:51:26.011802] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:57.825 [2024-07-24 19:51:26.014486] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:57.825 [2024-07-24 19:51:26.017854] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:57.825 [2024-07-24 19:51:26.021346] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:57.825 [2024-07-24 19:51:26.025146] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:57.825 [2024-07-24 19:51:26.028886] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:57.825 [2024-07-24 19:51:26.034332] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:57.825 [2024-07-24 19:51:26.038090] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:57.825 19:51:26 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 262144 -d 16 -t randwrite -r 10 00:12:57.825 [2024-07-24 19:51:26.042416] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:57.825 [2024-07-24 19:51:26.045916] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:57.825 [2024-07-24 19:51:26.049245] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: 
unsupported INQUIRY VPD page 0xb9 00:12:57.825 [2024-07-24 19:51:26.053179] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:57.825 [2024-07-24 19:51:26.057236] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:57.825 [2024-07-24 19:51:26.061118] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:57.825 [2024-07-24 19:51:26.065160] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:57.825 [global] 00:12:57.825 thread=1 00:12:57.825 invalidate=1 00:12:57.825 rw=randwrite 00:12:57.825 time_based=1 00:12:57.825 runtime=10 00:12:57.826 ioengine=libaio 00:12:57.826 direct=1 00:12:57.826 bs=262144 00:12:57.826 iodepth=16 00:12:57.826 norandommap=1 00:12:57.826 numjobs=1 00:12:57.826 00:12:57.826 [job0] 00:12:57.826 filename=/dev/sda 00:12:57.826 [job1] 00:12:57.826 filename=/dev/sdb 00:12:57.826 [job2] 00:12:57.826 filename=/dev/sdc 00:12:57.826 [job3] 00:12:57.826 filename=/dev/sdd 00:12:57.826 [job4] 00:12:57.826 filename=/dev/sde 00:12:57.826 [job5] 00:12:57.826 filename=/dev/sdf 00:12:57.826 [job6] 00:12:57.826 filename=/dev/sdg 00:12:57.826 [job7] 00:12:57.826 filename=/dev/sdh 00:12:57.826 [job8] 00:12:57.826 filename=/dev/sdi 00:12:57.826 [job9] 00:12:57.826 filename=/dev/sdj 00:12:57.826 [job10] 00:12:57.826 filename=/dev/sdk 00:12:57.826 [job11] 00:12:57.826 filename=/dev/sdl 00:12:57.826 [job12] 00:12:57.826 filename=/dev/sdm 00:12:57.826 [job13] 00:12:57.826 filename=/dev/sdn 00:12:57.826 [job14] 00:12:57.826 filename=/dev/sdo 00:12:57.826 [job15] 00:12:57.826 filename=/dev/sdp 00:12:57.826 [job16] 00:12:57.826 filename=/dev/sdq 00:12:57.826 [job17] 00:12:57.826 filename=/dev/sdr 00:12:57.826 [job18] 00:12:57.826 filename=/dev/sds 00:12:57.826 [job19] 00:12:57.826 filename=/dev/sdt 00:12:57.826 [job20] 00:12:57.826 filename=/dev/sdu 00:12:57.826 [job21] 00:12:57.826 filename=/dev/sdv 00:12:57.826 [job22] 00:12:57.826 
filename=/dev/sdw 00:12:57.826 [job23] 00:12:57.826 filename=/dev/sdx 00:12:57.826 [job24] 00:12:57.826 filename=/dev/sdy 00:12:57.826 [job25] 00:12:57.826 filename=/dev/sdz 00:12:57.826 [job26] 00:12:57.826 filename=/dev/sdaa 00:12:57.826 [job27] 00:12:57.826 filename=/dev/sdab 00:12:57.826 [job28] 00:12:57.826 filename=/dev/sdac 00:12:57.826 [job29] 00:12:57.826 filename=/dev/sdad 00:12:58.394 queue_depth set to 113 (sda) 00:12:58.394 queue_depth set to 113 (sdb) 00:12:58.394 queue_depth set to 113 (sdc) 00:12:58.394 queue_depth set to 113 (sdd) 00:12:58.394 queue_depth set to 113 (sde) 00:12:58.394 queue_depth set to 113 (sdf) 00:12:58.394 queue_depth set to 113 (sdg) 00:12:58.394 queue_depth set to 113 (sdh) 00:12:58.394 queue_depth set to 113 (sdi) 00:12:58.394 queue_depth set to 113 (sdj) 00:12:58.394 queue_depth set to 113 (sdk) 00:12:58.394 queue_depth set to 113 (sdl) 00:12:58.394 queue_depth set to 113 (sdm) 00:12:58.394 queue_depth set to 113 (sdn) 00:12:58.394 queue_depth set to 113 (sdo) 00:12:58.394 queue_depth set to 113 (sdp) 00:12:58.394 queue_depth set to 113 (sdq) 00:12:58.394 queue_depth set to 113 (sdr) 00:12:58.394 queue_depth set to 113 (sds) 00:12:58.394 queue_depth set to 113 (sdt) 00:12:58.394 queue_depth set to 113 (sdu) 00:12:58.394 queue_depth set to 113 (sdv) 00:12:58.394 queue_depth set to 113 (sdw) 00:12:58.394 queue_depth set to 113 (sdx) 00:12:58.394 queue_depth set to 113 (sdy) 00:12:58.394 queue_depth set to 113 (sdz) 00:12:58.394 queue_depth set to 113 (sdaa) 00:12:58.394 queue_depth set to 113 (sdab) 00:12:58.394 queue_depth set to 113 (sdac) 00:12:58.394 queue_depth set to 113 (sdad) 00:12:58.394 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:58.394 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:58.394 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 
256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:58.394 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:58.394 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:58.394 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:58.394 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:58.394 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:58.394 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:58.394 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:58.394 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:58.394 job11: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:58.394 job12: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:58.394 job13: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:58.394 job14: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:58.394 job15: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:58.394 job16: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:58.394 job17: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:58.394 job18: (g=0): 
rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:58.394 job19: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:58.394 job20: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:58.394 job21: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:58.394 job22: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:58.394 job23: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:58.394 job24: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:58.394 job25: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:58.394 job26: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:58.394 job27: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:58.394 job28: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:58.394 job29: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:12:58.394 fio-3.35 00:12:58.394 Starting 30 threads 00:12:58.394 [2024-07-24 19:51:26.958399] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:58.394 [2024-07-24 19:51:26.962792] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:58.394 [2024-07-24 19:51:26.966680] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:58.394 [2024-07-24 19:51:26.971531] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:58.394 [2024-07-24 19:51:26.976262] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:58.394 [2024-07-24 19:51:26.981202] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:58.394 [2024-07-24 19:51:26.986219] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:58.394 [2024-07-24 19:51:26.991040] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:58.394 [2024-07-24 19:51:26.994096] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:58.394 [2024-07-24 19:51:26.997387] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:58.394 [2024-07-24 19:51:27.000533] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:58.394 [2024-07-24 19:51:27.003562] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:58.394 [2024-07-24 19:51:27.006644] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:58.394 [2024-07-24 19:51:27.009614] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:58.394 [2024-07-24 19:51:27.012459] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:58.395 [2024-07-24 19:51:27.015279] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:58.395 [2024-07-24 19:51:27.018171] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:58.395 [2024-07-24 19:51:27.021261] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:58.395 [2024-07-24 19:51:27.024279] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:58.395 [2024-07-24 19:51:27.027389] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:12:58.395 [2024-07-24 19:51:27.032017] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:58.395 [2024-07-24 19:51:27.036610] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:58.395 [2024-07-24 19:51:27.040392] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:58.395 [2024-07-24 19:51:27.042976] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:58.395 [2024-07-24 19:51:27.045502] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:58.395 [2024-07-24 19:51:27.048180] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:58.395 [2024-07-24 19:51:27.050665] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:58.395 [2024-07-24 19:51:27.055177] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:58.395 [2024-07-24 19:51:27.059535] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:58.724 [2024-07-24 19:51:27.063209] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:10.952 [2024-07-24 19:51:37.897591] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:10.952 [2024-07-24 19:51:37.911605] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:10.952 [2024-07-24 19:51:37.917656] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:10.952 [2024-07-24 19:51:37.921571] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:10.952 [2024-07-24 19:51:37.924728] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:10.952 [2024-07-24 19:51:37.929481] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:10.952 [2024-07-24 
19:51:37.936969] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:10.952 [2024-07-24 19:51:37.941010] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:10.952 [2024-07-24 19:51:37.944556] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:10.952 [2024-07-24 19:51:37.948312] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:10.952 [2024-07-24 19:51:37.951814] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:10.952 [2024-07-24 19:51:37.955468] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:10.952 [2024-07-24 19:51:37.959351] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:10.952 [2024-07-24 19:51:37.963496] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:10.952 [2024-07-24 19:51:37.967250] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:10.952 [2024-07-24 19:51:37.972147] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:10.952 00:13:10.952 job0: (groupid=0, jobs=1): err= 0: pid=74226: Wed Jul 24 19:51:37 2024 00:13:10.952 write: IOPS=64, BW=16.2MiB/s (16.9MB/s)(165MiB/10197msec); 0 zone resets 00:13:10.952 slat (usec): min=34, max=326, avg=73.74, stdev=22.71 00:13:10.952 clat (msec): min=24, max=404, avg=247.18, stdev=29.74 00:13:10.952 lat (msec): min=24, max=404, avg=247.25, stdev=29.75 00:13:10.952 clat percentiles (msec): 00:13:10.952 | 1.00th=[ 109], 5.00th=[ 224], 10.00th=[ 226], 20.00th=[ 228], 00:13:10.952 | 30.00th=[ 234], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 255], 00:13:10.952 | 70.00th=[ 259], 80.00th=[ 262], 90.00th=[ 268], 95.00th=[ 279], 00:13:10.952 | 99.00th=[ 321], 99.50th=[ 363], 99.90th=[ 405], 99.95th=[ 405], 00:13:10.952 | 99.99th=[ 405] 00:13:10.952 bw ( KiB/s): min=14364, 
max=18468, per=3.32%, avg=16478.65, stdev=1110.66, samples=20 00:13:10.952 iops : min= 56, max= 72, avg=64.25, stdev= 4.33, samples=20 00:13:10.952 lat (msec) : 50=0.30%, 100=0.61%, 250=43.10%, 500=55.99% 00:13:10.952 cpu : usr=0.28%, sys=0.33%, ctx=668, majf=0, minf=1 00:13:10.952 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=97.7%, 32=0.0%, >=64=0.0% 00:13:10.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.952 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.952 issued rwts: total=0,659,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.952 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:10.952 job1: (groupid=0, jobs=1): err= 0: pid=74227: Wed Jul 24 19:51:37 2024 00:13:10.952 write: IOPS=64, BW=16.2MiB/s (16.9MB/s)(165MiB/10209msec); 0 zone resets 00:13:10.952 slat (usec): min=39, max=275, avg=71.95, stdev=17.53 00:13:10.952 clat (msec): min=20, max=407, avg=247.09, stdev=30.43 00:13:10.952 lat (msec): min=20, max=407, avg=247.16, stdev=30.43 00:13:10.952 clat percentiles (msec): 00:13:10.952 | 1.00th=[ 103], 5.00th=[ 224], 10.00th=[ 226], 20.00th=[ 228], 00:13:10.952 | 30.00th=[ 234], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 255], 00:13:10.952 | 70.00th=[ 259], 80.00th=[ 262], 90.00th=[ 268], 95.00th=[ 279], 00:13:10.952 | 99.00th=[ 321], 99.50th=[ 363], 99.90th=[ 409], 99.95th=[ 409], 00:13:10.952 | 99.99th=[ 409] 00:13:10.952 bw ( KiB/s): min=14336, max=18468, per=3.32%, avg=16488.20, stdev=1125.05, samples=20 00:13:10.952 iops : min= 56, max= 72, avg=64.40, stdev= 4.38, samples=20 00:13:10.952 lat (msec) : 50=0.45%, 100=0.45%, 250=43.03%, 500=56.06% 00:13:10.952 cpu : usr=0.25%, sys=0.27%, ctx=661, majf=0, minf=1 00:13:10.952 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=97.7%, 32=0.0%, >=64=0.0% 00:13:10.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.952 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.952 
issued rwts: total=0,660,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.952 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:10.952 job2: (groupid=0, jobs=1): err= 0: pid=74228: Wed Jul 24 19:51:37 2024 00:13:10.952 write: IOPS=65, BW=16.4MiB/s (17.1MB/s)(167MiB/10213msec); 0 zone resets 00:13:10.952 slat (usec): min=25, max=389, avg=78.93, stdev=29.19 00:13:10.952 clat (msec): min=2, max=421, avg=244.20, stdev=41.29 00:13:10.952 lat (msec): min=2, max=421, avg=244.28, stdev=41.29 00:13:10.952 clat percentiles (msec): 00:13:10.952 | 1.00th=[ 14], 5.00th=[ 224], 10.00th=[ 226], 20.00th=[ 228], 00:13:10.952 | 30.00th=[ 232], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 255], 00:13:10.952 | 70.00th=[ 259], 80.00th=[ 262], 90.00th=[ 268], 95.00th=[ 279], 00:13:10.952 | 99.00th=[ 338], 99.50th=[ 380], 99.90th=[ 422], 99.95th=[ 422], 00:13:10.952 | 99.99th=[ 422] 00:13:10.952 bw ( KiB/s): min=14336, max=21504, per=3.37%, avg=16708.50, stdev=1588.01, samples=20 00:13:10.952 iops : min= 56, max= 84, avg=65.05, stdev= 6.25, samples=20 00:13:10.952 lat (msec) : 4=0.30%, 10=0.45%, 20=0.45%, 50=0.60%, 100=0.45% 00:13:10.952 lat (msec) : 250=42.37%, 500=55.39% 00:13:10.952 cpu : usr=0.39%, sys=0.28%, ctx=678, majf=0, minf=1 00:13:10.952 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=97.8%, 32=0.0%, >=64=0.0% 00:13:10.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.952 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.952 issued rwts: total=0,668,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.952 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:10.952 job3: (groupid=0, jobs=1): err= 0: pid=74229: Wed Jul 24 19:51:37 2024 00:13:10.952 write: IOPS=64, BW=16.1MiB/s (16.9MB/s)(165MiB/10202msec); 0 zone resets 00:13:10.952 slat (usec): min=23, max=332, avg=89.51, stdev=40.71 00:13:10.952 clat (msec): min=21, max=412, avg=247.26, stdev=30.51 00:13:10.952 lat (msec): min=21, max=412, avg=247.35, 
stdev=30.52 00:13:10.952 clat percentiles (msec): 00:13:10.952 | 1.00th=[ 106], 5.00th=[ 224], 10.00th=[ 226], 20.00th=[ 228], 00:13:10.952 | 30.00th=[ 234], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 255], 00:13:10.952 | 70.00th=[ 259], 80.00th=[ 262], 90.00th=[ 268], 95.00th=[ 279], 00:13:10.952 | 99.00th=[ 330], 99.50th=[ 372], 99.90th=[ 414], 99.95th=[ 414], 00:13:10.953 | 99.99th=[ 414] 00:13:10.953 bw ( KiB/s): min=14307, max=18468, per=3.32%, avg=16486.80, stdev=1121.15, samples=20 00:13:10.953 iops : min= 55, max= 72, avg=64.15, stdev= 4.52, samples=20 00:13:10.953 lat (msec) : 50=0.46%, 100=0.46%, 250=42.94%, 500=56.15% 00:13:10.953 cpu : usr=0.26%, sys=0.39%, ctx=707, majf=0, minf=1 00:13:10.953 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=97.7%, 32=0.0%, >=64=0.0% 00:13:10.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.953 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.953 issued rwts: total=0,659,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.953 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:10.953 job4: (groupid=0, jobs=1): err= 0: pid=74230: Wed Jul 24 19:51:37 2024 00:13:10.953 write: IOPS=64, BW=16.2MiB/s (16.9MB/s)(165MiB/10194msec); 0 zone resets 00:13:10.953 slat (usec): min=17, max=162, avg=67.32, stdev=15.19 00:13:10.953 clat (msec): min=23, max=401, avg=247.12, stdev=29.64 00:13:10.953 lat (msec): min=23, max=401, avg=247.19, stdev=29.64 00:13:10.953 clat percentiles (msec): 00:13:10.953 | 1.00th=[ 109], 5.00th=[ 224], 10.00th=[ 226], 20.00th=[ 228], 00:13:10.953 | 30.00th=[ 234], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 255], 00:13:10.953 | 70.00th=[ 259], 80.00th=[ 262], 90.00th=[ 268], 95.00th=[ 279], 00:13:10.953 | 99.00th=[ 317], 99.50th=[ 359], 99.90th=[ 401], 99.95th=[ 401], 00:13:10.953 | 99.99th=[ 401] 00:13:10.953 bw ( KiB/s): min=14848, max=18395, per=3.32%, avg=16481.25, stdev=1081.06, samples=20 00:13:10.953 iops : min= 58, max= 71, 
avg=64.25, stdev= 4.17, samples=20 00:13:10.953 lat (msec) : 50=0.30%, 100=0.61%, 250=43.25%, 500=55.84% 00:13:10.953 cpu : usr=0.30%, sys=0.23%, ctx=660, majf=0, minf=1 00:13:10.953 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=97.7%, 32=0.0%, >=64=0.0% 00:13:10.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.953 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.953 issued rwts: total=0,659,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.953 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:10.953 job5: (groupid=0, jobs=1): err= 0: pid=74231: Wed Jul 24 19:51:37 2024 00:13:10.953 write: IOPS=64, BW=16.2MiB/s (16.9MB/s)(165MiB/10210msec); 0 zone resets 00:13:10.953 slat (usec): min=38, max=611, avg=97.69, stdev=49.69 00:13:10.953 clat (msec): min=19, max=409, avg=247.08, stdev=30.56 00:13:10.953 lat (msec): min=19, max=409, avg=247.18, stdev=30.56 00:13:10.953 clat percentiles (msec): 00:13:10.953 | 1.00th=[ 103], 5.00th=[ 224], 10.00th=[ 226], 20.00th=[ 228], 00:13:10.953 | 30.00th=[ 234], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 255], 00:13:10.953 | 70.00th=[ 259], 80.00th=[ 262], 90.00th=[ 268], 95.00th=[ 279], 00:13:10.953 | 99.00th=[ 326], 99.50th=[ 368], 99.90th=[ 409], 99.95th=[ 409], 00:13:10.953 | 99.99th=[ 409] 00:13:10.953 bw ( KiB/s): min=14336, max=18395, per=3.32%, avg=16483.05, stdev=1109.28, samples=20 00:13:10.953 iops : min= 56, max= 71, avg=64.25, stdev= 4.34, samples=20 00:13:10.953 lat (msec) : 20=0.15%, 50=0.30%, 100=0.45%, 250=43.03%, 500=56.06% 00:13:10.953 cpu : usr=0.23%, sys=0.42%, ctx=728, majf=0, minf=1 00:13:10.953 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=97.7%, 32=0.0%, >=64=0.0% 00:13:10.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.953 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.953 issued rwts: total=0,660,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.953 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:13:10.953 job6: (groupid=0, jobs=1): err= 0: pid=74258: Wed Jul 24 19:51:37 2024 00:13:10.953 write: IOPS=64, BW=16.2MiB/s (16.9MB/s)(165MiB/10196msec); 0 zone resets 00:13:10.953 slat (usec): min=34, max=422, avg=81.92, stdev=30.79 00:13:10.953 clat (msec): min=24, max=402, avg=247.14, stdev=29.68 00:13:10.953 lat (msec): min=24, max=402, avg=247.22, stdev=29.69 00:13:10.953 clat percentiles (msec): 00:13:10.953 | 1.00th=[ 110], 5.00th=[ 224], 10.00th=[ 226], 20.00th=[ 228], 00:13:10.953 | 30.00th=[ 232], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 255], 00:13:10.953 | 70.00th=[ 259], 80.00th=[ 262], 90.00th=[ 268], 95.00th=[ 279], 00:13:10.953 | 99.00th=[ 317], 99.50th=[ 359], 99.90th=[ 405], 99.95th=[ 405], 00:13:10.953 | 99.99th=[ 405] 00:13:10.953 bw ( KiB/s): min=14364, max=18468, per=3.32%, avg=16478.65, stdev=1123.11, samples=20 00:13:10.953 iops : min= 56, max= 72, avg=64.25, stdev= 4.38, samples=20 00:13:10.953 lat (msec) : 50=0.30%, 100=0.61%, 250=43.10%, 500=55.99% 00:13:10.953 cpu : usr=0.17%, sys=0.45%, ctx=696, majf=0, minf=1 00:13:10.953 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=97.7%, 32=0.0%, >=64=0.0% 00:13:10.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.953 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.953 issued rwts: total=0,659,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.953 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:10.953 job7: (groupid=0, jobs=1): err= 0: pid=74265: Wed Jul 24 19:51:37 2024 00:13:10.953 write: IOPS=64, BW=16.2MiB/s (17.0MB/s)(166MiB/10214msec); 0 zone resets 00:13:10.953 slat (usec): min=41, max=161, avg=75.80, stdev=15.70 00:13:10.953 clat (msec): min=5, max=416, avg=246.45, stdev=33.91 00:13:10.953 lat (msec): min=5, max=417, avg=246.53, stdev=33.92 00:13:10.953 clat percentiles (msec): 00:13:10.953 | 1.00th=[ 70], 5.00th=[ 224], 10.00th=[ 226], 20.00th=[ 228], 
00:13:10.953 | 30.00th=[ 232], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 255], 00:13:10.953 | 70.00th=[ 259], 80.00th=[ 262], 90.00th=[ 268], 95.00th=[ 279], 00:13:10.953 | 99.00th=[ 334], 99.50th=[ 376], 99.90th=[ 418], 99.95th=[ 418], 00:13:10.953 | 99.99th=[ 418] 00:13:10.953 bw ( KiB/s): min=14848, max=18432, per=3.34%, avg=16557.90, stdev=1172.89, samples=20 00:13:10.953 iops : min= 58, max= 72, avg=64.55, stdev= 4.48, samples=20 00:13:10.953 lat (msec) : 10=0.30%, 20=0.15%, 50=0.30%, 100=0.60%, 250=42.60% 00:13:10.953 lat (msec) : 500=56.04% 00:13:10.953 cpu : usr=0.35%, sys=0.31%, ctx=664, majf=0, minf=1 00:13:10.953 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=97.7%, 32=0.0%, >=64=0.0% 00:13:10.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.953 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.953 issued rwts: total=0,662,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.953 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:10.953 job8: (groupid=0, jobs=1): err= 0: pid=74266: Wed Jul 24 19:51:37 2024 00:13:10.953 write: IOPS=64, BW=16.1MiB/s (16.9MB/s)(165MiB/10204msec); 0 zone resets 00:13:10.953 slat (usec): min=40, max=311, avg=76.52, stdev=22.19 00:13:10.953 clat (msec): min=21, max=415, avg=247.34, stdev=30.63 00:13:10.953 lat (msec): min=22, max=415, avg=247.42, stdev=30.63 00:13:10.953 clat percentiles (msec): 00:13:10.953 | 1.00th=[ 105], 5.00th=[ 224], 10.00th=[ 226], 20.00th=[ 228], 00:13:10.953 | 30.00th=[ 234], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 255], 00:13:10.953 | 70.00th=[ 259], 80.00th=[ 262], 90.00th=[ 268], 95.00th=[ 279], 00:13:10.953 | 99.00th=[ 330], 99.50th=[ 372], 99.90th=[ 418], 99.95th=[ 418], 00:13:10.953 | 99.99th=[ 418] 00:13:10.953 bw ( KiB/s): min=14336, max=18468, per=3.32%, avg=16488.20, stdev=1112.72, samples=20 00:13:10.953 iops : min= 56, max= 72, avg=64.40, stdev= 4.33, samples=20 00:13:10.953 lat (msec) : 50=0.46%, 100=0.46%, 
250=42.79%, 500=56.30% 00:13:10.953 cpu : usr=0.21%, sys=0.42%, ctx=673, majf=0, minf=1 00:13:10.953 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=97.7%, 32=0.0%, >=64=0.0% 00:13:10.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.953 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.953 issued rwts: total=0,659,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.953 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:10.953 job9: (groupid=0, jobs=1): err= 0: pid=74267: Wed Jul 24 19:51:37 2024 00:13:10.953 write: IOPS=64, BW=16.1MiB/s (16.9MB/s)(165MiB/10205msec); 0 zone resets 00:13:10.953 slat (usec): min=26, max=1052, avg=61.36, stdev=41.75 00:13:10.953 clat (msec): min=20, max=416, avg=247.35, stdev=30.77 00:13:10.953 lat (msec): min=21, max=416, avg=247.41, stdev=30.76 00:13:10.953 clat percentiles (msec): 00:13:10.953 | 1.00th=[ 104], 5.00th=[ 224], 10.00th=[ 226], 20.00th=[ 228], 00:13:10.953 | 30.00th=[ 234], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 255], 00:13:10.953 | 70.00th=[ 259], 80.00th=[ 262], 90.00th=[ 268], 95.00th=[ 279], 00:13:10.953 | 99.00th=[ 330], 99.50th=[ 376], 99.90th=[ 418], 99.95th=[ 418], 00:13:10.953 | 99.99th=[ 418] 00:13:10.953 bw ( KiB/s): min=14336, max=18395, per=3.32%, avg=16483.05, stdev=1109.28, samples=20 00:13:10.953 iops : min= 56, max= 71, avg=64.25, stdev= 4.34, samples=20 00:13:10.953 lat (msec) : 50=0.46%, 100=0.46%, 250=42.94%, 500=56.15% 00:13:10.953 cpu : usr=0.26%, sys=0.24%, ctx=666, majf=0, minf=1 00:13:10.953 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=97.7%, 32=0.0%, >=64=0.0% 00:13:10.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.953 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.953 issued rwts: total=0,659,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.953 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:10.953 job10: (groupid=0, jobs=1): err= 0: 
pid=74269: Wed Jul 24 19:51:37 2024 00:13:10.953 write: IOPS=64, BW=16.1MiB/s (16.9MB/s)(165MiB/10213msec); 0 zone resets 00:13:10.953 slat (usec): min=38, max=9304, avg=89.24, stdev=360.09 00:13:10.953 clat (msec): min=19, max=417, avg=247.33, stdev=30.97 00:13:10.953 lat (msec): min=28, max=417, avg=247.42, stdev=30.87 00:13:10.953 clat percentiles (msec): 00:13:10.953 | 1.00th=[ 103], 5.00th=[ 224], 10.00th=[ 226], 20.00th=[ 228], 00:13:10.953 | 30.00th=[ 234], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 255], 00:13:10.953 | 70.00th=[ 259], 80.00th=[ 262], 90.00th=[ 268], 95.00th=[ 279], 00:13:10.953 | 99.00th=[ 334], 99.50th=[ 376], 99.90th=[ 418], 99.95th=[ 418], 00:13:10.953 | 99.99th=[ 418] 00:13:10.953 bw ( KiB/s): min=14848, max=18432, per=3.32%, avg=16482.75, stdev=1104.38, samples=20 00:13:10.953 iops : min= 58, max= 72, avg=64.25, stdev= 4.22, samples=20 00:13:10.953 lat (msec) : 20=0.15%, 50=0.30%, 100=0.46%, 250=43.10%, 500=55.99% 00:13:10.953 cpu : usr=0.28%, sys=0.36%, ctx=669, majf=0, minf=1 00:13:10.953 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=97.7%, 32=0.0%, >=64=0.0% 00:13:10.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.954 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.954 issued rwts: total=0,659,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.954 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:10.954 job11: (groupid=0, jobs=1): err= 0: pid=74310: Wed Jul 24 19:51:37 2024 00:13:10.954 write: IOPS=64, BW=16.2MiB/s (16.9MB/s)(165MiB/10210msec); 0 zone resets 00:13:10.954 slat (usec): min=38, max=190, avg=73.35, stdev=15.97 00:13:10.954 clat (msec): min=19, max=408, avg=247.11, stdev=30.53 00:13:10.954 lat (msec): min=19, max=409, avg=247.19, stdev=30.54 00:13:10.954 clat percentiles (msec): 00:13:10.954 | 1.00th=[ 103], 5.00th=[ 224], 10.00th=[ 226], 20.00th=[ 228], 00:13:10.954 | 30.00th=[ 234], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 255], 00:13:10.954 
| 70.00th=[ 259], 80.00th=[ 262], 90.00th=[ 268], 95.00th=[ 279], 00:13:10.954 | 99.00th=[ 326], 99.50th=[ 368], 99.90th=[ 409], 99.95th=[ 409], 00:13:10.954 | 99.99th=[ 409] 00:13:10.954 bw ( KiB/s): min=14336, max=18395, per=3.32%, avg=16483.05, stdev=1109.28, samples=20 00:13:10.954 iops : min= 56, max= 71, avg=64.25, stdev= 4.34, samples=20 00:13:10.954 lat (msec) : 20=0.15%, 50=0.30%, 100=0.45%, 250=43.03%, 500=56.06% 00:13:10.954 cpu : usr=0.24%, sys=0.40%, ctx=661, majf=0, minf=1 00:13:10.954 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=97.7%, 32=0.0%, >=64=0.0% 00:13:10.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.954 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.954 issued rwts: total=0,660,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.954 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:10.954 job12: (groupid=0, jobs=1): err= 0: pid=74349: Wed Jul 24 19:51:37 2024 00:13:10.954 write: IOPS=64, BW=16.2MiB/s (16.9MB/s)(165MiB/10196msec); 0 zone resets 00:13:10.954 slat (usec): min=24, max=216, avg=71.43, stdev=16.93 00:13:10.954 clat (msec): min=24, max=402, avg=247.16, stdev=29.68 00:13:10.954 lat (msec): min=24, max=403, avg=247.23, stdev=29.69 00:13:10.954 clat percentiles (msec): 00:13:10.954 | 1.00th=[ 110], 5.00th=[ 224], 10.00th=[ 226], 20.00th=[ 228], 00:13:10.954 | 30.00th=[ 234], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 255], 00:13:10.954 | 70.00th=[ 259], 80.00th=[ 262], 90.00th=[ 268], 95.00th=[ 279], 00:13:10.954 | 99.00th=[ 317], 99.50th=[ 359], 99.90th=[ 405], 99.95th=[ 405], 00:13:10.954 | 99.99th=[ 405] 00:13:10.954 bw ( KiB/s): min=14364, max=18468, per=3.32%, avg=16478.65, stdev=1110.66, samples=20 00:13:10.954 iops : min= 56, max= 72, avg=64.25, stdev= 4.33, samples=20 00:13:10.954 lat (msec) : 50=0.30%, 100=0.61%, 250=43.10%, 500=55.99% 00:13:10.954 cpu : usr=0.30%, sys=0.32%, ctx=670, majf=0, minf=1 00:13:10.954 IO depths : 1=0.2%, 2=0.3%, 
4=0.6%, 8=1.2%, 16=97.7%, 32=0.0%, >=64=0.0% 00:13:10.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.954 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.954 issued rwts: total=0,659,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.954 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:10.954 job13: (groupid=0, jobs=1): err= 0: pid=74416: Wed Jul 24 19:51:37 2024 00:13:10.954 write: IOPS=64, BW=16.2MiB/s (17.0MB/s)(165MiB/10209msec); 0 zone resets 00:13:10.954 slat (usec): min=38, max=189, avg=71.39, stdev=16.24 00:13:10.954 clat (msec): min=6, max=419, avg=246.70, stdev=33.62 00:13:10.954 lat (msec): min=7, max=419, avg=246.77, stdev=33.62 00:13:10.954 clat percentiles (msec): 00:13:10.954 | 1.00th=[ 73], 5.00th=[ 224], 10.00th=[ 226], 20.00th=[ 228], 00:13:10.954 | 30.00th=[ 234], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 255], 00:13:10.954 | 70.00th=[ 259], 80.00th=[ 262], 90.00th=[ 268], 95.00th=[ 279], 00:13:10.954 | 99.00th=[ 334], 99.50th=[ 376], 99.90th=[ 422], 99.95th=[ 422], 00:13:10.954 | 99.99th=[ 422] 00:13:10.954 bw ( KiB/s): min=14336, max=18395, per=3.33%, avg=16532.30, stdev=1146.89, samples=20 00:13:10.954 iops : min= 56, max= 71, avg=64.45, stdev= 4.37, samples=20 00:13:10.954 lat (msec) : 10=0.15%, 20=0.15%, 50=0.45%, 100=0.45%, 250=42.97% 00:13:10.954 lat (msec) : 500=55.82% 00:13:10.954 cpu : usr=0.27%, sys=0.24%, ctx=664, majf=0, minf=1 00:13:10.954 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=97.7%, 32=0.0%, >=64=0.0% 00:13:10.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.954 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.954 issued rwts: total=0,661,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.954 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:10.954 job14: (groupid=0, jobs=1): err= 0: pid=74421: Wed Jul 24 19:51:37 2024 00:13:10.954 write: IOPS=64, BW=16.1MiB/s 
(16.9MB/s)(165MiB/10204msec); 0 zone resets 00:13:10.954 slat (usec): min=26, max=160, avg=60.41, stdev=13.85 00:13:10.954 clat (msec): min=22, max=413, avg=247.36, stdev=30.44 00:13:10.954 lat (msec): min=22, max=413, avg=247.42, stdev=30.44 00:13:10.954 clat percentiles (msec): 00:13:10.954 | 1.00th=[ 107], 5.00th=[ 224], 10.00th=[ 226], 20.00th=[ 228], 00:13:10.954 | 30.00th=[ 234], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 255], 00:13:10.954 | 70.00th=[ 259], 80.00th=[ 262], 90.00th=[ 268], 95.00th=[ 279], 00:13:10.954 | 99.00th=[ 330], 99.50th=[ 372], 99.90th=[ 414], 99.95th=[ 414], 00:13:10.954 | 99.99th=[ 414] 00:13:10.954 bw ( KiB/s): min=14336, max=18468, per=3.32%, avg=16489.85, stdev=1125.70, samples=20 00:13:10.954 iops : min= 56, max= 72, avg=64.40, stdev= 4.38, samples=20 00:13:10.954 lat (msec) : 50=0.46%, 100=0.46%, 250=42.94%, 500=56.15% 00:13:10.954 cpu : usr=0.26%, sys=0.23%, ctx=666, majf=0, minf=1 00:13:10.954 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=97.7%, 32=0.0%, >=64=0.0% 00:13:10.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.954 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.954 issued rwts: total=0,659,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.954 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:10.954 job15: (groupid=0, jobs=1): err= 0: pid=74423: Wed Jul 24 19:51:37 2024 00:13:10.954 write: IOPS=64, BW=16.2MiB/s (17.0MB/s)(166MiB/10212msec); 0 zone resets 00:13:10.954 slat (usec): min=36, max=1108, avg=67.39, stdev=45.75 00:13:10.954 clat (msec): min=6, max=420, avg=246.02, stdev=35.68 00:13:10.954 lat (msec): min=6, max=420, avg=246.09, stdev=35.68 00:13:10.954 clat percentiles (msec): 00:13:10.954 | 1.00th=[ 52], 5.00th=[ 222], 10.00th=[ 226], 20.00th=[ 228], 00:13:10.954 | 30.00th=[ 232], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 255], 00:13:10.954 | 70.00th=[ 259], 80.00th=[ 262], 90.00th=[ 268], 95.00th=[ 279], 00:13:10.954 | 99.00th=[ 
334], 99.50th=[ 380], 99.90th=[ 422], 99.95th=[ 422], 00:13:10.954 | 99.99th=[ 422] 00:13:10.954 bw ( KiB/s): min=14336, max=18944, per=3.34%, avg=16580.50, stdev=1236.81, samples=20 00:13:10.954 iops : min= 56, max= 74, avg=64.55, stdev= 4.87, samples=20 00:13:10.954 lat (msec) : 10=0.15%, 20=0.45%, 50=0.30%, 100=0.60%, 250=42.53% 00:13:10.954 lat (msec) : 500=55.96% 00:13:10.954 cpu : usr=0.28%, sys=0.24%, ctx=679, majf=0, minf=1 00:13:10.954 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=97.7%, 32=0.0%, >=64=0.0% 00:13:10.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.954 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.954 issued rwts: total=0,663,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.954 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:10.954 job16: (groupid=0, jobs=1): err= 0: pid=74424: Wed Jul 24 19:51:37 2024 00:13:10.954 write: IOPS=64, BW=16.2MiB/s (16.9MB/s)(165MiB/10211msec); 0 zone resets 00:13:10.954 slat (usec): min=32, max=780, avg=66.08, stdev=51.65 00:13:10.954 clat (msec): min=20, max=409, avg=247.14, stdev=30.46 00:13:10.954 lat (msec): min=20, max=409, avg=247.21, stdev=30.47 00:13:10.954 clat percentiles (msec): 00:13:10.954 | 1.00th=[ 104], 5.00th=[ 224], 10.00th=[ 226], 20.00th=[ 228], 00:13:10.954 | 30.00th=[ 232], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 255], 00:13:10.954 | 70.00th=[ 259], 80.00th=[ 262], 90.00th=[ 268], 95.00th=[ 279], 00:13:10.954 | 99.00th=[ 326], 99.50th=[ 368], 99.90th=[ 409], 99.95th=[ 409], 00:13:10.954 | 99.99th=[ 409] 00:13:10.954 bw ( KiB/s): min=14336, max=18395, per=3.32%, avg=16481.40, stdev=1108.61, samples=20 00:13:10.954 iops : min= 56, max= 71, avg=64.25, stdev= 4.34, samples=20 00:13:10.954 lat (msec) : 50=0.45%, 100=0.45%, 250=43.03%, 500=56.06% 00:13:10.954 cpu : usr=0.32%, sys=0.18%, ctx=678, majf=0, minf=1 00:13:10.954 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=97.7%, 32=0.0%, >=64=0.0% 00:13:10.954 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.954 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.954 issued rwts: total=0,660,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.954 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:10.954 job17: (groupid=0, jobs=1): err= 0: pid=74425: Wed Jul 24 19:51:37 2024 00:13:10.954 write: IOPS=64, BW=16.1MiB/s (16.9MB/s)(165MiB/10202msec); 0 zone resets 00:13:10.954 slat (usec): min=35, max=149, avg=71.15, stdev=15.43 00:13:10.954 clat (msec): min=22, max=411, avg=247.29, stdev=30.35 00:13:10.954 lat (msec): min=22, max=411, avg=247.36, stdev=30.36 00:13:10.954 clat percentiles (msec): 00:13:10.954 | 1.00th=[ 107], 5.00th=[ 224], 10.00th=[ 226], 20.00th=[ 228], 00:13:10.954 | 30.00th=[ 234], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 255], 00:13:10.954 | 70.00th=[ 259], 80.00th=[ 262], 90.00th=[ 268], 95.00th=[ 279], 00:13:10.954 | 99.00th=[ 326], 99.50th=[ 368], 99.90th=[ 414], 99.95th=[ 414], 00:13:10.954 | 99.99th=[ 414] 00:13:10.954 bw ( KiB/s): min=14307, max=18468, per=3.32%, avg=16488.45, stdev=1134.04, samples=20 00:13:10.954 iops : min= 55, max= 72, avg=64.15, stdev= 4.57, samples=20 00:13:10.954 lat (msec) : 50=0.46%, 100=0.46%, 250=43.10%, 500=55.99% 00:13:10.954 cpu : usr=0.37%, sys=0.25%, ctx=659, majf=0, minf=1 00:13:10.954 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=97.7%, 32=0.0%, >=64=0.0% 00:13:10.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.954 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.954 issued rwts: total=0,659,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.954 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:10.954 job18: (groupid=0, jobs=1): err= 0: pid=74426: Wed Jul 24 19:51:37 2024 00:13:10.954 write: IOPS=64, BW=16.2MiB/s (16.9MB/s)(165MiB/10210msec); 0 zone resets 00:13:10.954 slat (usec): min=21, max=172, avg=71.45, stdev=15.81 
00:13:10.954 clat (msec): min=18, max=410, avg=247.13, stdev=30.68 00:13:10.954 lat (msec): min=18, max=410, avg=247.20, stdev=30.69 00:13:10.954 clat percentiles (msec): 00:13:10.955 | 1.00th=[ 102], 5.00th=[ 224], 10.00th=[ 226], 20.00th=[ 228], 00:13:10.955 | 30.00th=[ 234], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 255], 00:13:10.955 | 70.00th=[ 259], 80.00th=[ 262], 90.00th=[ 268], 95.00th=[ 279], 00:13:10.955 | 99.00th=[ 326], 99.50th=[ 368], 99.90th=[ 409], 99.95th=[ 409], 00:13:10.955 | 99.99th=[ 409] 00:13:10.955 bw ( KiB/s): min=14336, max=18432, per=3.33%, avg=16506.95, stdev=1159.97, samples=20 00:13:10.955 iops : min= 56, max= 72, avg=64.35, stdev= 4.52, samples=20 00:13:10.955 lat (msec) : 20=0.15%, 50=0.30%, 100=0.45%, 250=43.18%, 500=55.91% 00:13:10.955 cpu : usr=0.33%, sys=0.29%, ctx=667, majf=0, minf=1 00:13:10.955 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=97.7%, 32=0.0%, >=64=0.0% 00:13:10.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.955 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.955 issued rwts: total=0,660,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:10.955 job19: (groupid=0, jobs=1): err= 0: pid=74427: Wed Jul 24 19:51:37 2024 00:13:10.955 write: IOPS=64, BW=16.2MiB/s (17.0MB/s)(165MiB/10220msec); 0 zone resets 00:13:10.955 slat (usec): min=36, max=6695, avg=81.39, stdev=258.26 00:13:10.955 clat (msec): min=6, max=418, avg=246.71, stdev=33.26 00:13:10.955 lat (msec): min=12, max=418, avg=246.79, stdev=33.19 00:13:10.955 clat percentiles (msec): 00:13:10.955 | 1.00th=[ 78], 5.00th=[ 224], 10.00th=[ 226], 20.00th=[ 228], 00:13:10.955 | 30.00th=[ 234], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 255], 00:13:10.955 | 70.00th=[ 259], 80.00th=[ 262], 90.00th=[ 268], 95.00th=[ 279], 00:13:10.955 | 99.00th=[ 334], 99.50th=[ 376], 99.90th=[ 418], 99.95th=[ 418], 00:13:10.955 | 99.99th=[ 418] 00:13:10.955 
bw ( KiB/s): min=14336, max=18432, per=3.33%, avg=16529.25, stdev=1163.21, samples=20 00:13:10.955 iops : min= 56, max= 72, avg=64.35, stdev= 4.56, samples=20 00:13:10.955 lat (msec) : 10=0.15%, 20=0.15%, 50=0.30%, 100=0.61%, 250=42.97% 00:13:10.955 lat (msec) : 500=55.82% 00:13:10.955 cpu : usr=0.34%, sys=0.27%, ctx=664, majf=0, minf=1 00:13:10.955 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=97.7%, 32=0.0%, >=64=0.0% 00:13:10.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.955 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.955 issued rwts: total=0,661,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:10.955 job20: (groupid=0, jobs=1): err= 0: pid=74428: Wed Jul 24 19:51:37 2024 00:13:10.955 write: IOPS=65, BW=16.3MiB/s (17.1MB/s)(166MiB/10220msec); 0 zone resets 00:13:10.955 slat (usec): min=32, max=170, avg=74.99, stdev=15.71 00:13:10.955 clat (msec): min=2, max=418, avg=245.47, stdev=37.15 00:13:10.955 lat (msec): min=2, max=418, avg=245.54, stdev=37.15 00:13:10.955 clat percentiles (msec): 00:13:10.955 | 1.00th=[ 34], 5.00th=[ 222], 10.00th=[ 226], 20.00th=[ 228], 00:13:10.955 | 30.00th=[ 234], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 255], 00:13:10.955 | 70.00th=[ 259], 80.00th=[ 262], 90.00th=[ 268], 95.00th=[ 279], 00:13:10.955 | 99.00th=[ 334], 99.50th=[ 376], 99.90th=[ 418], 99.95th=[ 418], 00:13:10.955 | 99.99th=[ 418] 00:13:10.955 bw ( KiB/s): min=14336, max=19456, per=3.35%, avg=16631.65, stdev=1323.71, samples=20 00:13:10.955 iops : min= 56, max= 76, avg=64.75, stdev= 5.20, samples=20 00:13:10.955 lat (msec) : 4=0.15%, 10=0.30%, 20=0.30%, 50=0.45%, 100=0.45% 00:13:10.955 lat (msec) : 250=42.86%, 500=55.49% 00:13:10.955 cpu : usr=0.29%, sys=0.37%, ctx=670, majf=0, minf=1 00:13:10.955 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=97.7%, 32=0.0%, >=64=0.0% 00:13:10.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.955 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.955 issued rwts: total=0,665,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:10.955 job21: (groupid=0, jobs=1): err= 0: pid=74429: Wed Jul 24 19:51:37 2024 00:13:10.955 write: IOPS=64, BW=16.2MiB/s (17.0MB/s)(166MiB/10215msec); 0 zone resets 00:13:10.955 slat (usec): min=35, max=5256, avg=84.03, stdev=218.62 00:13:10.955 clat (msec): min=3, max=420, avg=245.94, stdev=36.15 00:13:10.955 lat (msec): min=5, max=420, avg=246.02, stdev=36.08 00:13:10.955 clat percentiles (msec): 00:13:10.955 | 1.00th=[ 48], 5.00th=[ 222], 10.00th=[ 226], 20.00th=[ 228], 00:13:10.955 | 30.00th=[ 232], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 255], 00:13:10.955 | 70.00th=[ 259], 80.00th=[ 262], 90.00th=[ 268], 95.00th=[ 279], 00:13:10.955 | 99.00th=[ 334], 99.50th=[ 376], 99.90th=[ 422], 99.95th=[ 422], 00:13:10.955 | 99.99th=[ 422] 00:13:10.955 bw ( KiB/s): min=14336, max=18944, per=3.34%, avg=16580.55, stdev=1226.48, samples=20 00:13:10.955 iops : min= 56, max= 74, avg=64.55, stdev= 4.85, samples=20 00:13:10.955 lat (msec) : 4=0.15%, 10=0.30%, 20=0.30%, 50=0.30%, 100=0.45% 00:13:10.955 lat (msec) : 250=42.53%, 500=55.96% 00:13:10.955 cpu : usr=0.30%, sys=0.33%, ctx=667, majf=0, minf=1 00:13:10.955 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=97.7%, 32=0.0%, >=64=0.0% 00:13:10.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.955 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.955 issued rwts: total=0,663,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:10.955 job22: (groupid=0, jobs=1): err= 0: pid=74430: Wed Jul 24 19:51:37 2024 00:13:10.955 write: IOPS=64, BW=16.1MiB/s (16.9MB/s)(165MiB/10202msec); 0 zone resets 00:13:10.955 slat (usec): min=24, max=205, avg=68.76, 
stdev=18.49 00:13:10.955 clat (msec): min=21, max=412, avg=247.29, stdev=30.53 00:13:10.955 lat (msec): min=21, max=412, avg=247.35, stdev=30.53 00:13:10.955 clat percentiles (msec): 00:13:10.955 | 1.00th=[ 105], 5.00th=[ 224], 10.00th=[ 226], 20.00th=[ 228], 00:13:10.955 | 30.00th=[ 234], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 255], 00:13:10.955 | 70.00th=[ 259], 80.00th=[ 262], 90.00th=[ 268], 95.00th=[ 279], 00:13:10.955 | 99.00th=[ 330], 99.50th=[ 372], 99.90th=[ 414], 99.95th=[ 414], 00:13:10.955 | 99.99th=[ 414] 00:13:10.955 bw ( KiB/s): min=14307, max=18468, per=3.32%, avg=16486.80, stdev=1121.15, samples=20 00:13:10.955 iops : min= 55, max= 72, avg=64.15, stdev= 4.52, samples=20 00:13:10.955 lat (msec) : 50=0.46%, 100=0.46%, 250=43.10%, 500=55.99% 00:13:10.955 cpu : usr=0.36%, sys=0.24%, ctx=665, majf=0, minf=1 00:13:10.955 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=97.7%, 32=0.0%, >=64=0.0% 00:13:10.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.955 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.955 issued rwts: total=0,659,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:10.955 job23: (groupid=0, jobs=1): err= 0: pid=74431: Wed Jul 24 19:51:37 2024 00:13:10.955 write: IOPS=64, BW=16.2MiB/s (16.9MB/s)(165MiB/10209msec); 0 zone resets 00:13:10.955 slat (usec): min=24, max=263, avg=64.28, stdev=18.61 00:13:10.955 clat (msec): min=12, max=412, avg=247.10, stdev=31.29 00:13:10.955 lat (msec): min=12, max=412, avg=247.17, stdev=31.29 00:13:10.955 clat percentiles (msec): 00:13:10.955 | 1.00th=[ 97], 5.00th=[ 224], 10.00th=[ 226], 20.00th=[ 228], 00:13:10.955 | 30.00th=[ 234], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 255], 00:13:10.955 | 70.00th=[ 259], 80.00th=[ 262], 90.00th=[ 268], 95.00th=[ 279], 00:13:10.955 | 99.00th=[ 326], 99.50th=[ 372], 99.90th=[ 414], 99.95th=[ 414], 00:13:10.955 | 99.99th=[ 414] 00:13:10.955 
bw ( KiB/s): min=14848, max=18432, per=3.33%, avg=16507.00, stdev=1112.15, samples=20 00:13:10.955 iops : min= 58, max= 72, avg=64.35, stdev= 4.36, samples=20 00:13:10.955 lat (msec) : 20=0.15%, 50=0.30%, 100=0.61%, 250=42.88%, 500=56.06% 00:13:10.955 cpu : usr=0.21%, sys=0.33%, ctx=685, majf=0, minf=1 00:13:10.955 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=97.7%, 32=0.0%, >=64=0.0% 00:13:10.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.955 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.955 issued rwts: total=0,660,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:10.955 job24: (groupid=0, jobs=1): err= 0: pid=74432: Wed Jul 24 19:51:37 2024 00:13:10.955 write: IOPS=64, BW=16.1MiB/s (16.9MB/s)(165MiB/10205msec); 0 zone resets 00:13:10.955 slat (usec): min=35, max=168, avg=69.88, stdev=16.88 00:13:10.955 clat (msec): min=23, max=414, avg=247.37, stdev=30.34 00:13:10.955 lat (msec): min=24, max=414, avg=247.44, stdev=30.34 00:13:10.955 clat percentiles (msec): 00:13:10.955 | 1.00th=[ 108], 5.00th=[ 224], 10.00th=[ 226], 20.00th=[ 228], 00:13:10.955 | 30.00th=[ 234], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 255], 00:13:10.955 | 70.00th=[ 259], 80.00th=[ 262], 90.00th=[ 268], 95.00th=[ 279], 00:13:10.955 | 99.00th=[ 330], 99.50th=[ 372], 99.90th=[ 414], 99.95th=[ 414], 00:13:10.955 | 99.99th=[ 414] 00:13:10.955 bw ( KiB/s): min=14336, max=18468, per=3.32%, avg=16488.20, stdev=1125.05, samples=20 00:13:10.955 iops : min= 56, max= 72, avg=64.40, stdev= 4.38, samples=20 00:13:10.955 lat (msec) : 50=0.30%, 100=0.61%, 250=43.10%, 500=55.99% 00:13:10.955 cpu : usr=0.35%, sys=0.25%, ctx=662, majf=0, minf=1 00:13:10.955 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=97.7%, 32=0.0%, >=64=0.0% 00:13:10.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.955 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.955 issued rwts: total=0,659,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.955 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:10.955 job25: (groupid=0, jobs=1): err= 0: pid=74433: Wed Jul 24 19:51:37 2024 00:13:10.955 write: IOPS=64, BW=16.2MiB/s (17.0MB/s)(165MiB/10190msec); 0 zone resets 00:13:10.955 slat (usec): min=22, max=154, avg=71.55, stdev=16.39 00:13:10.955 clat (msec): min=23, max=397, avg=247.03, stdev=29.50 00:13:10.955 lat (msec): min=23, max=397, avg=247.10, stdev=29.51 00:13:10.955 clat percentiles (msec): 00:13:10.955 | 1.00th=[ 110], 5.00th=[ 224], 10.00th=[ 226], 20.00th=[ 228], 00:13:10.955 | 30.00th=[ 234], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 255], 00:13:10.955 | 70.00th=[ 259], 80.00th=[ 262], 90.00th=[ 268], 95.00th=[ 279], 00:13:10.955 | 99.00th=[ 313], 99.50th=[ 355], 99.90th=[ 397], 99.95th=[ 397], 00:13:10.955 | 99.99th=[ 397] 00:13:10.955 bw ( KiB/s): min=14848, max=18432, per=3.32%, avg=16483.10, stdev=1084.53, samples=20 00:13:10.955 iops : min= 58, max= 72, avg=64.30, stdev= 4.26, samples=20 00:13:10.955 lat (msec) : 50=0.30%, 100=0.61%, 250=43.10%, 500=55.99% 00:13:10.956 cpu : usr=0.36%, sys=0.26%, ctx=661, majf=0, minf=1 00:13:10.956 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=97.7%, 32=0.0%, >=64=0.0% 00:13:10.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.956 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.956 issued rwts: total=0,659,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.956 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:10.956 job26: (groupid=0, jobs=1): err= 0: pid=74434: Wed Jul 24 19:51:37 2024 00:13:10.956 write: IOPS=64, BW=16.2MiB/s (16.9MB/s)(165MiB/10198msec); 0 zone resets 00:13:10.956 slat (usec): min=24, max=241, avg=60.59, stdev=16.79 00:13:10.956 clat (msec): min=23, max=406, avg=247.22, stdev=29.92 00:13:10.956 lat (msec): min=23, max=406, avg=247.28, 
stdev=29.92 00:13:10.956 clat percentiles (msec): 00:13:10.956 | 1.00th=[ 109], 5.00th=[ 224], 10.00th=[ 226], 20.00th=[ 228], 00:13:10.956 | 30.00th=[ 234], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 255], 00:13:10.956 | 70.00th=[ 259], 80.00th=[ 262], 90.00th=[ 268], 95.00th=[ 279], 00:13:10.956 | 99.00th=[ 321], 99.50th=[ 363], 99.90th=[ 405], 99.95th=[ 405], 00:13:10.956 | 99.99th=[ 405] 00:13:10.956 bw ( KiB/s): min=14364, max=18468, per=3.32%, avg=16473.55, stdev=1108.03, samples=20 00:13:10.956 iops : min= 56, max= 72, avg=64.20, stdev= 4.31, samples=20 00:13:10.956 lat (msec) : 50=0.30%, 100=0.61%, 250=42.79%, 500=56.30% 00:13:10.956 cpu : usr=0.23%, sys=0.28%, ctx=678, majf=0, minf=1 00:13:10.956 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=97.7%, 32=0.0%, >=64=0.0% 00:13:10.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.956 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.956 issued rwts: total=0,659,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.956 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:10.956 job27: (groupid=0, jobs=1): err= 0: pid=74435: Wed Jul 24 19:51:37 2024 00:13:10.956 write: IOPS=64, BW=16.1MiB/s (16.9MB/s)(165MiB/10203msec); 0 zone resets 00:13:10.956 slat (usec): min=42, max=196, avg=73.31, stdev=16.01 00:13:10.956 clat (msec): min=22, max=414, avg=247.31, stdev=30.56 00:13:10.956 lat (msec): min=22, max=414, avg=247.39, stdev=30.56 00:13:10.956 clat percentiles (msec): 00:13:10.956 | 1.00th=[ 105], 5.00th=[ 224], 10.00th=[ 226], 20.00th=[ 228], 00:13:10.956 | 30.00th=[ 234], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 255], 00:13:10.956 | 70.00th=[ 259], 80.00th=[ 262], 90.00th=[ 268], 95.00th=[ 279], 00:13:10.956 | 99.00th=[ 330], 99.50th=[ 372], 99.90th=[ 414], 99.95th=[ 414], 00:13:10.956 | 99.99th=[ 414] 00:13:10.956 bw ( KiB/s): min=14336, max=18468, per=3.32%, avg=16485.20, stdev=1119.83, samples=20 00:13:10.956 iops : min= 56, max= 72, 
avg=64.25, stdev= 4.39, samples=20 00:13:10.956 lat (msec) : 50=0.46%, 100=0.46%, 250=43.10%, 500=55.99% 00:13:10.956 cpu : usr=0.31%, sys=0.33%, ctx=659, majf=0, minf=1 00:13:10.956 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=97.7%, 32=0.0%, >=64=0.0% 00:13:10.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.956 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.956 issued rwts: total=0,659,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.956 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:10.956 job28: (groupid=0, jobs=1): err= 0: pid=74436: Wed Jul 24 19:51:37 2024 00:13:10.956 write: IOPS=64, BW=16.1MiB/s (16.9MB/s)(165MiB/10208msec); 0 zone resets 00:13:10.956 slat (usec): min=36, max=6750, avg=80.76, stdev=318.23 00:13:10.956 clat (msec): min=20, max=412, avg=247.26, stdev=30.59 00:13:10.956 lat (msec): min=27, max=412, avg=247.34, stdev=30.51 00:13:10.956 clat percentiles (msec): 00:13:10.956 | 1.00th=[ 105], 5.00th=[ 224], 10.00th=[ 226], 20.00th=[ 228], 00:13:10.956 | 30.00th=[ 234], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 255], 00:13:10.956 | 70.00th=[ 259], 80.00th=[ 262], 90.00th=[ 268], 95.00th=[ 279], 00:13:10.956 | 99.00th=[ 330], 99.50th=[ 372], 99.90th=[ 414], 99.95th=[ 414], 00:13:10.956 | 99.99th=[ 414] 00:13:10.956 bw ( KiB/s): min=14848, max=18432, per=3.32%, avg=16483.00, stdev=1108.48, samples=20 00:13:10.956 iops : min= 58, max= 72, avg=64.25, stdev= 4.34, samples=20 00:13:10.956 lat (msec) : 50=0.46%, 100=0.46%, 250=42.94%, 500=56.15% 00:13:10.956 cpu : usr=0.24%, sys=0.29%, ctx=675, majf=0, minf=1 00:13:10.956 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=97.7%, 32=0.0%, >=64=0.0% 00:13:10.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.956 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.956 issued rwts: total=0,659,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.956 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:13:10.956 job29: (groupid=0, jobs=1): err= 0: pid=74437: Wed Jul 24 19:51:37 2024 00:13:10.956 write: IOPS=64, BW=16.2MiB/s (17.0MB/s)(165MiB/10189msec); 0 zone resets 00:13:10.956 slat (usec): min=32, max=215, avg=70.01, stdev=17.12 00:13:10.956 clat (msec): min=24, max=395, avg=247.01, stdev=29.34 00:13:10.956 lat (msec): min=24, max=395, avg=247.08, stdev=29.34 00:13:10.956 clat percentiles (msec): 00:13:10.956 | 1.00th=[ 110], 5.00th=[ 224], 10.00th=[ 226], 20.00th=[ 228], 00:13:10.956 | 30.00th=[ 232], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 255], 00:13:10.956 | 70.00th=[ 259], 80.00th=[ 262], 90.00th=[ 268], 95.00th=[ 279], 00:13:10.956 | 99.00th=[ 313], 99.50th=[ 355], 99.90th=[ 397], 99.95th=[ 397], 00:13:10.956 | 99.99th=[ 397] 00:13:10.956 bw ( KiB/s): min=14336, max=18432, per=3.32%, avg=16484.75, stdev=1110.36, samples=20 00:13:10.956 iops : min= 56, max= 72, avg=64.30, stdev= 4.35, samples=20 00:13:10.956 lat (msec) : 50=0.30%, 100=0.61%, 250=43.10%, 500=55.99% 00:13:10.956 cpu : usr=0.27%, sys=0.35%, ctx=662, majf=0, minf=1 00:13:10.956 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=97.7%, 32=0.0%, >=64=0.0% 00:13:10.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.956 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.956 issued rwts: total=0,659,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.956 latency : target=0, window=0, percentile=100.00%, depth=16 00:13:10.956 00:13:10.956 Run status group 0 (all jobs): 00:13:10.956 WRITE: bw=484MiB/s (508MB/s), 16.1MiB/s-16.4MiB/s (16.9MB/s-17.1MB/s), io=4952MiB (5192MB), run=10189-10220msec 00:13:10.956 00:13:10.956 Disk stats (read/write): 00:13:10.956 sda: ios=48/643, merge=0/0, ticks=212/158072, in_queue=158285, util=94.39% 00:13:10.956 sdb: ios=48/644, merge=0/0, ticks=189/158191, in_queue=158380, util=94.60% 00:13:10.956 sdc: ios=48/655, merge=0/0, ticks=209/158715, in_queue=158925, util=95.13% 
00:13:10.956 sdd: ios=48/644, merge=0/0, ticks=212/158207, in_queue=158418, util=94.88% 00:13:10.956 sde: ios=48/643, merge=0/0, ticks=195/158082, in_queue=158276, util=94.75% 00:13:10.956 sdf: ios=48/645, merge=0/0, ticks=160/158372, in_queue=158531, util=95.14% 00:13:10.956 sdg: ios=48/643, merge=0/0, ticks=213/158043, in_queue=158256, util=95.13% 00:13:10.956 sdh: ios=39/648, merge=0/0, ticks=125/158593, in_queue=158718, util=95.75% 00:13:10.956 sdi: ios=27/644, merge=0/0, ticks=117/158224, in_queue=158341, util=95.23% 00:13:10.956 sdj: ios=19/644, merge=0/0, ticks=173/158167, in_queue=158340, util=95.44% 00:13:10.956 sdk: ios=14/645, merge=0/0, ticks=75/158417, in_queue=158493, util=95.43% 00:13:10.956 sdl: ios=0/645, merge=0/0, ticks=0/158426, in_queue=158425, util=95.56% 00:13:10.956 sdm: ios=0/643, merge=0/0, ticks=0/158075, in_queue=158075, util=95.57% 00:13:10.956 sdn: ios=0/647, merge=0/0, ticks=0/158448, in_queue=158448, util=96.19% 00:13:10.956 sdo: ios=0/644, merge=0/0, ticks=0/158213, in_queue=158213, util=96.14% 00:13:10.956 sdp: ios=0/650, merge=0/0, ticks=0/158619, in_queue=158619, util=96.75% 00:13:10.956 sdq: ios=0/645, merge=0/0, ticks=0/158364, in_queue=158364, util=96.72% 00:13:10.956 sdr: ios=0/644, merge=0/0, ticks=0/158253, in_queue=158252, util=96.99% 00:13:10.956 sds: ios=0/645, merge=0/0, ticks=0/158424, in_queue=158423, util=97.18% 00:13:10.956 sdt: ios=0/647, merge=0/0, ticks=0/158492, in_queue=158492, util=97.53% 00:13:10.956 sdu: ios=0/651, merge=0/0, ticks=0/158655, in_queue=158655, util=97.72% 00:13:10.956 sdv: ios=0/649, merge=0/0, ticks=0/158452, in_queue=158452, util=97.89% 00:13:10.956 sdw: ios=0/644, merge=0/0, ticks=0/158232, in_queue=158231, util=97.77% 00:13:10.956 sdx: ios=0/645, merge=0/0, ticks=0/158301, in_queue=158301, util=98.00% 00:13:10.956 sdy: ios=0/644, merge=0/0, ticks=0/158278, in_queue=158278, util=97.98% 00:13:10.956 sdz: ios=0/643, merge=0/0, ticks=0/158066, in_queue=158066, util=97.94% 00:13:10.956 sdaa: 
ios=0/643, merge=0/0, ticks=0/158009, in_queue=158009, util=98.25% 00:13:10.956 sdab: ios=0/644, merge=0/0, ticks=0/158234, in_queue=158234, util=98.36% 00:13:10.956 sdac: ios=0/644, merge=0/0, ticks=0/158164, in_queue=158164, util=98.44% 00:13:10.956 sdad: ios=0/642, merge=0/0, ticks=0/157859, in_queue=157859, util=98.71% 00:13:10.956 [2024-07-24 19:51:37.979079] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:10.956 [2024-07-24 19:51:37.983735] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:10.956 [2024-07-24 19:51:37.987407] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:10.956 [2024-07-24 19:51:37.991054] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:10.956 [2024-07-24 19:51:37.994255] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:10.956 [2024-07-24 19:51:37.997238] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:10.956 19:51:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@79 -- # sync 00:13:10.956 [2024-07-24 19:51:38.001224] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:10.956 [2024-07-24 19:51:38.007069] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:10.956 [2024-07-24 19:51:38.010303] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:10.956 [2024-07-24 19:51:38.013410] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:10.956 [2024-07-24 19:51:38.016692] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:10.956 19:51:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:13:10.956 19:51:38 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@83 -- # rm -f 00:13:10.956 [2024-07-24 19:51:38.021718] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:10.956 19:51:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@84 -- # iscsicleanup 00:13:10.956 19:51:38 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@982 -- # echo 'Cleaning up iSCSI connection' 00:13:10.956 Cleaning up iSCSI connection 00:13:10.956 19:51:38 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@983 -- # iscsiadm -m node --logout 00:13:10.956 [2024-07-24 19:51:38.025934] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:10.956 [2024-07-24 19:51:38.029603] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:10.956 Logging out of session [sid: 33, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:13:10.957 Logging out of session [sid: 34, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:13:10.957 Logging out of session [sid: 35, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:13:10.957 Logging out of session [sid: 36, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:13:10.957 Logging out of session [sid: 37, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:13:10.957 Logging out of session [sid: 38, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:13:10.957 Logging out of session [sid: 39, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:13:10.957 Logging out of session [sid: 40, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:13:10.957 Logging out of session [sid: 41, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:13:10.957 Logging out of session [sid: 42, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:13:10.957 Logging out of session [sid: 43, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] 
00:13:10.957 Logging out of session [sid: 44, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] 00:13:10.957 Logging out of session [sid: 45, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] 00:13:10.957 Logging out of session [sid: 46, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] 00:13:10.957 Logging out of session [sid: 47, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] 00:13:10.957 Logging out of session [sid: 48, target: iqn.2016-06.io.spdk:Target16, portal: 10.0.0.1,3260] 00:13:10.957 Logging out of session [sid: 49, target: iqn.2016-06.io.spdk:Target17, portal: 10.0.0.1,3260] 00:13:10.957 Logging out of session [sid: 50, target: iqn.2016-06.io.spdk:Target18, portal: 10.0.0.1,3260] 00:13:10.957 Logging out of session [sid: 51, target: iqn.2016-06.io.spdk:Target19, portal: 10.0.0.1,3260] 00:13:10.957 Logging out of session [sid: 52, target: iqn.2016-06.io.spdk:Target20, portal: 10.0.0.1,3260] 00:13:10.957 Logging out of session [sid: 53, target: iqn.2016-06.io.spdk:Target21, portal: 10.0.0.1,3260] 00:13:10.957 Logging out of session [sid: 54, target: iqn.2016-06.io.spdk:Target22, portal: 10.0.0.1,3260] 00:13:10.957 Logging out of session [sid: 55, target: iqn.2016-06.io.spdk:Target23, portal: 10.0.0.1,3260] 00:13:10.957 Logging out of session [sid: 56, target: iqn.2016-06.io.spdk:Target24, portal: 10.0.0.1,3260] 00:13:10.957 Logging out of session [sid: 57, target: iqn.2016-06.io.spdk:Target25, portal: 10.0.0.1,3260] 00:13:10.957 Logging out of session [sid: 58, target: iqn.2016-06.io.spdk:Target26, portal: 10.0.0.1,3260] 00:13:10.957 Logging out of session [sid: 59, target: iqn.2016-06.io.spdk:Target27, portal: 10.0.0.1,3260] 00:13:10.957 Logging out of session [sid: 60, target: iqn.2016-06.io.spdk:Target28, portal: 10.0.0.1,3260] 00:13:10.957 Logging out of session [sid: 61, target: iqn.2016-06.io.spdk:Target29, portal: 10.0.0.1,3260] 00:13:10.957 Logging out of session [sid: 62, target: 
iqn.2016-06.io.spdk:Target30, portal: 10.0.0.1,3260] 00:13:10.957 Logout of [sid: 33, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:13:10.957 Logout of [sid: 34, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:13:10.957 Logout of [sid: 35, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:13:10.957 Logout of [sid: 36, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:13:10.957 Logout of [sid: 37, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 00:13:10.957 Logout of [sid: 38, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 00:13:10.957 Logout of [sid: 39, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:13:10.957 Logout of [sid: 40, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 00:13:10.957 Logout of [sid: 41, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:13:10.957 Logout of [sid: 42, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:13:10.957 Logout of [sid: 43, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] successful. 00:13:10.957 Logout of [sid: 44, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] successful. 00:13:10.957 Logout of [sid: 45, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] successful. 00:13:10.957 Logout of [sid: 46, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] successful. 00:13:10.957 Logout of [sid: 47, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] successful. 00:13:10.957 Logout of [sid: 48, target: iqn.2016-06.io.spdk:Target16, portal: 10.0.0.1,3260] successful. 00:13:10.957 Logout of [sid: 49, target: iqn.2016-06.io.spdk:Target17, portal: 10.0.0.1,3260] successful. 00:13:10.957 Logout of [sid: 50, target: iqn.2016-06.io.spdk:Target18, portal: 10.0.0.1,3260] successful. 
00:13:10.957 Logout of [sid: 51, target: iqn.2016-06.io.spdk:Target19, portal: 10.0.0.1,3260] successful. 00:13:10.957 Logout of [sid: 52, target: iqn.2016-06.io.spdk:Target20, portal: 10.0.0.1,3260] successful. 00:13:10.957 Logout of [sid: 53, target: iqn.2016-06.io.spdk:Target21, portal: 10.0.0.1,3260] successful. 00:13:10.957 Logout of [sid: 54, target: iqn.2016-06.io.spdk:Target22, portal: 10.0.0.1,3260] successful. 00:13:10.957 Logout of [sid: 55, target: iqn.2016-06.io.spdk:Target23, portal: 10.0.0.1,3260] successful. 00:13:10.957 Logout of [sid: 56, target: iqn.2016-06.io.spdk:Target24, portal: 10.0.0.1,3260] successful. 00:13:10.957 Logout of [sid: 57, target: iqn.2016-06.io.spdk:Target25, portal: 10.0.0.1,3260] successful. 00:13:10.957 Logout of [sid: 58, target: iqn.2016-06.io.spdk:Target26, portal: 10.0.0.1,3260] successful. 00:13:10.957 Logout of [sid: 59, target: iqn.2016-06.io.spdk:Target27, portal: 10.0.0.1,3260] successful. 00:13:10.957 Logout of [sid: 60, target: iqn.2016-06.io.spdk:Target28, portal: 10.0.0.1,3260] successful. 00:13:10.957 Logout of [sid: 61, target: iqn.2016-06.io.spdk:Target29, portal: 10.0.0.1,3260] successful. 00:13:10.957 Logout of [sid: 62, target: iqn.2016-06.io.spdk:Target30, portal: 10.0.0.1,3260] successful. 
00:13:10.957 19:51:38 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@984 -- # iscsiadm -m node -o delete 00:13:10.957 19:51:39 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@985 -- # rm -rf 00:13:10.957 INFO: Removing lvol bdevs 00:13:10.957 19:51:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@85 -- # remove_backends 00:13:10.957 19:51:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@22 -- # echo 'INFO: Removing lvol bdevs' 00:13:10.957 19:51:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # seq 1 30 00:13:10.957 19:51:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:10.957 19:51:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_1 00:13:10.957 19:51:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_1 00:13:10.957 [2024-07-24 19:51:39.301360] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (8fa12143-11c4-4d03-acdb-2e5ce6786ee6) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:10.957 INFO: lvol bdev lvs0/lbd_1 removed 00:13:10.957 19:51:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_1 removed' 00:13:10.957 19:51:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:10.957 19:51:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_2 00:13:10.957 19:51:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_2 00:13:10.957 [2024-07-24 19:51:39.601457] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (d8a2c8ed-1ce2-4362-a212-ae7771fa0e65) 
received event(SPDK_BDEV_EVENT_REMOVE) 00:13:11.215 INFO: lvol bdev lvs0/lbd_2 removed 00:13:11.215 19:51:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_2 removed' 00:13:11.215 19:51:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:11.215 19:51:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_3 00:13:11.215 19:51:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_3 00:13:11.473 [2024-07-24 19:51:39.901631] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (7c81bd68-9f39-4c68-9b60-de43abc21725) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:11.473 INFO: lvol bdev lvs0/lbd_3 removed 00:13:11.473 19:51:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_3 removed' 00:13:11.473 19:51:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:11.473 19:51:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_4 00:13:11.473 19:51:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_4 00:13:11.731 [2024-07-24 19:51:40.193746] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (45bc263f-cc8a-4d7b-9c50-6670b469e3e1) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:11.731 INFO: lvol bdev lvs0/lbd_4 removed 00:13:11.732 19:51:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_4 removed' 00:13:11.732 19:51:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:11.732 
19:51:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_5 00:13:11.732 19:51:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_5 00:13:11.990 [2024-07-24 19:51:40.425827] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (fa72fa05-5d24-40ba-8f85-c9fca4401a09) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:11.990 INFO: lvol bdev lvs0/lbd_5 removed 00:13:11.990 19:51:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_5 removed' 00:13:11.990 19:51:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:11.990 19:51:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_6 00:13:11.990 19:51:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_6 00:13:12.247 [2024-07-24 19:51:40.678128] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (17babbb5-474b-42d9-aad4-565f254fe2ef) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:12.247 INFO: lvol bdev lvs0/lbd_6 removed 00:13:12.247 19:51:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_6 removed' 00:13:12.247 19:51:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:12.247 19:51:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_7 00:13:12.247 19:51:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_7 00:13:12.541 [2024-07-24 19:51:40.974236] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name 
(42134c3f-e1af-4878-b286-590e4363ca3d) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:12.541 INFO: lvol bdev lvs0/lbd_7 removed 00:13:12.541 19:51:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_7 removed' 00:13:12.541 19:51:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:12.541 19:51:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_8 00:13:12.541 19:51:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_8 00:13:12.799 [2024-07-24 19:51:41.262321] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (e7309cca-4bda-4035-a83f-27fa38705f35) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:12.799 INFO: lvol bdev lvs0/lbd_8 removed 00:13:12.799 19:51:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_8 removed' 00:13:12.799 19:51:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:12.799 19:51:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_9 00:13:12.799 19:51:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_9 00:13:13.056 [2024-07-24 19:51:41.490409] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (03d99d8e-0579-4819-b915-c1dc100a0f4e) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:13.056 INFO: lvol bdev lvs0/lbd_9 removed 00:13:13.056 19:51:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_9 removed' 00:13:13.056 19:51:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 
$CONNECTION_NUMBER) 00:13:13.056 19:51:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_10 00:13:13.056 19:51:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_10 00:13:13.314 [2024-07-24 19:51:41.794510] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (5e325068-e2b8-42ec-8ac7-6643181f0755) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:13.314 INFO: lvol bdev lvs0/lbd_10 removed 00:13:13.314 19:51:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_10 removed' 00:13:13.314 19:51:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:13.314 19:51:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_11 00:13:13.314 19:51:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_11 00:13:13.573 [2024-07-24 19:51:42.030628] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (0ab1a8c0-d3dc-4e98-8577-3e928ab9ecee) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:13.573 INFO: lvol bdev lvs0/lbd_11 removed 00:13:13.573 19:51:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_11 removed' 00:13:13.573 19:51:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:13.573 19:51:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_12 00:13:13.573 19:51:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_12 00:13:13.832 [2024-07-24 19:51:42.282947] lun.c: 398:bdev_event_cb: 
*NOTICE*: bdev name (b5965e14-ca91-436b-8ff0-52e6eac4fae7) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:13.832 INFO: lvol bdev lvs0/lbd_12 removed 00:13:13.832 19:51:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_12 removed' 00:13:13.832 19:51:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:13.832 19:51:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_13 00:13:13.832 19:51:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_13 00:13:14.090 [2024-07-24 19:51:42.519014] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (b47665fa-0cf1-41d6-8c3a-2f03fb430837) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:14.090 INFO: lvol bdev lvs0/lbd_13 removed 00:13:14.090 19:51:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_13 removed' 00:13:14.090 19:51:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:14.090 19:51:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_14 00:13:14.090 19:51:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_14 00:13:14.349 [2024-07-24 19:51:42.759083] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (cecfc10d-e289-4eb8-a0a0-d6f618a6214b) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:14.349 INFO: lvol bdev lvs0/lbd_14 removed 00:13:14.349 19:51:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_14 removed' 00:13:14.349 19:51:42 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:14.349 19:51:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_15 00:13:14.349 19:51:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_15 00:13:14.349 [2024-07-24 19:51:43.011216] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (066e2fd0-10f1-437f-9105-d65e889b22ff) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:14.607 INFO: lvol bdev lvs0/lbd_15 removed 00:13:14.607 19:51:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_15 removed' 00:13:14.607 19:51:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:14.607 19:51:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_16 00:13:14.607 19:51:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_16 00:13:14.607 [2024-07-24 19:51:43.255298] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (c93a650a-3dbe-4546-8e90-499538f8888a) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:14.866 INFO: lvol bdev lvs0/lbd_16 removed 00:13:14.866 19:51:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_16 removed' 00:13:14.866 19:51:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:14.866 19:51:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_17 00:13:14.866 19:51:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_17 00:13:14.866 
[2024-07-24 19:51:43.467376] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (db6aa37d-156f-4ff2-9257-465b8b435b27) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:14.866 INFO: lvol bdev lvs0/lbd_17 removed 00:13:14.866 19:51:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_17 removed' 00:13:14.866 19:51:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:14.866 19:51:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_18 00:13:14.866 19:51:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_18 00:13:15.162 [2024-07-24 19:51:43.687457] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (02b66384-b229-4e46-aa59-65352ceb62bf) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:15.162 INFO: lvol bdev lvs0/lbd_18 removed 00:13:15.162 19:51:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_18 removed' 00:13:15.162 19:51:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:15.162 19:51:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_19 00:13:15.162 19:51:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_19 00:13:15.420 [2024-07-24 19:51:43.919559] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (3ab90269-491d-40eb-ac1d-2a86e6415447) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:15.420 INFO: lvol bdev lvs0/lbd_19 removed 00:13:15.420 19:51:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_19 removed' 00:13:15.420 19:51:43 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:15.420 19:51:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_20 00:13:15.420 19:51:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_20 00:13:15.678 [2024-07-24 19:51:44.151643] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (edd0aa01-e334-4266-bc1e-da745202b2f4) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:15.678 INFO: lvol bdev lvs0/lbd_20 removed 00:13:15.678 19:51:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_20 removed' 00:13:15.678 19:51:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:15.678 19:51:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_21 00:13:15.678 19:51:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_21 00:13:15.936 [2024-07-24 19:51:44.375721] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (1186ffb4-0be4-48bd-82b3-f2850a32cc95) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:15.936 INFO: lvol bdev lvs0/lbd_21 removed 00:13:15.936 19:51:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_21 removed' 00:13:15.936 19:51:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:15.936 19:51:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_22 00:13:15.936 19:51:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete lvs0/lbd_22 00:13:16.195 [2024-07-24 19:51:44.691854] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (182c9026-45bb-4cb1-b899-042115318b6d) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:16.195 19:51:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_22 removed' 00:13:16.195 INFO: lvol bdev lvs0/lbd_22 removed 00:13:16.195 19:51:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:16.195 19:51:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_23 00:13:16.195 19:51:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_23 00:13:16.453 [2024-07-24 19:51:44.979935] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (78043ea0-6878-4ed2-a071-c82161484d89) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:16.453 INFO: lvol bdev lvs0/lbd_23 removed 00:13:16.453 19:51:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_23 removed' 00:13:16.453 19:51:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:16.453 19:51:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_24 00:13:16.454 19:51:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_24 00:13:16.712 [2024-07-24 19:51:45.196027] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (0ca38c0b-aa54-46ea-83d1-0a66ee12c132) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:16.712 INFO: lvol bdev lvs0/lbd_24 removed 00:13:16.712 19:51:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_24 
removed' 00:13:16.712 19:51:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:16.712 19:51:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_25 00:13:16.712 19:51:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_25 00:13:16.971 [2024-07-24 19:51:45.408162] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (b1403173-d687-4149-aab0-1e3a68705fc9) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:16.971 INFO: lvol bdev lvs0/lbd_25 removed 00:13:16.971 19:51:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_25 removed' 00:13:16.971 19:51:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:16.971 19:51:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_26 00:13:16.971 19:51:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_26 00:13:16.971 [2024-07-24 19:51:45.616260] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (20a84d10-2f49-4c89-8e8d-e52fcf658ef6) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:17.229 INFO: lvol bdev lvs0/lbd_26 removed 00:13:17.229 19:51:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_26 removed' 00:13:17.229 19:51:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:17.229 19:51:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_27 00:13:17.229 19:51:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_27 00:13:17.229 [2024-07-24 19:51:45.820346] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (a439cacc-0190-46bf-af30-d30d98825071) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:17.229 INFO: lvol bdev lvs0/lbd_27 removed 00:13:17.229 19:51:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_27 removed' 00:13:17.229 19:51:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:17.229 19:51:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_28 00:13:17.229 19:51:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_28 00:13:17.487 [2024-07-24 19:51:46.012386] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (1602db61-f502-44f1-a849-e7dd524f7753) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:17.487 INFO: lvol bdev lvs0/lbd_28 removed 00:13:17.487 19:51:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_28 removed' 00:13:17.487 19:51:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:17.487 19:51:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_29 00:13:17.487 19:51:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_29 00:13:17.745 [2024-07-24 19:51:46.304524] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (bdccf3f9-e7a3-4470-be55-b1bc093468fb) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:17.745 INFO: lvol bdev lvs0/lbd_29 removed 00:13:17.745 19:51:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- 
# echo -e '\tINFO: lvol bdev lvs0/lbd_29 removed' 00:13:17.745 19:51:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:13:17.745 19:51:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_30 00:13:17.745 19:51:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_30 00:13:18.003 [2024-07-24 19:51:46.492585] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (7f3caa05-fa25-4c9c-8d34-c8e4e87d828f) received event(SPDK_BDEV_EVENT_REMOVE) 00:13:18.003 INFO: lvol bdev lvs0/lbd_30 removed 00:13:18.003 19:51:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_30 removed' 00:13:18.003 19:51:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@28 -- # sleep 1 00:13:18.936 INFO: Removing lvol stores 00:13:18.936 19:51:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@30 -- # echo 'INFO: Removing lvol stores' 00:13:18.936 19:51:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:13:19.193 19:51:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@32 -- # echo 'INFO: lvol store lvs0 removed' 00:13:19.193 INFO: lvol store lvs0 removed 00:13:19.193 INFO: Removing NVMe 00:13:19.193 19:51:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@34 -- # echo 'INFO: Removing NVMe' 00:13:19.193 19:51:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:13:21.092 19:51:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@37 -- # return 0 00:13:21.092 19:51:49 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@86 -- # killprocess 72534 00:13:21.092 19:51:49 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 72534 ']' 00:13:21.092 19:51:49 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@954 -- # kill -0 72534 00:13:21.092 19:51:49 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@955 -- # uname 00:13:21.092 19:51:49 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:21.092 19:51:49 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72534 00:13:21.092 killing process with pid 72534 00:13:21.092 19:51:49 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:21.092 19:51:49 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:21.092 19:51:49 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72534' 00:13:21.092 19:51:49 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@969 -- # kill 72534 00:13:21.092 19:51:49 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@974 -- # wait 72534 00:13:21.659 19:51:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@87 -- # iscsitestfini 00:13:21.659 19:51:50 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:13:21.659 ************************************ 00:13:21.659 END TEST iscsi_tgt_multiconnection 00:13:21.659 ************************************ 00:13:21.659 00:13:21.659 real 0m53.403s 00:13:21.659 user 1m5.577s 00:13:21.659 sys 0m16.479s 00:13:21.659 19:51:50 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:21.659 19:51:50 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:13:21.659 
19:51:50 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@46 -- # '[' 0 -eq 1 ']' 00:13:21.659 19:51:50 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@49 -- # '[' 0 -eq 1 ']' 00:13:21.659 19:51:50 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@57 -- # trap 'cleanup_veth_interfaces; exit 1' SIGINT SIGTERM EXIT 00:13:21.659 19:51:50 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@59 -- # '[' 0 -eq 1 ']' 00:13:21.659 19:51:50 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@65 -- # cleanup_veth_interfaces 00:13:21.659 19:51:50 iscsi_tgt -- iscsi_tgt/common.sh@95 -- # ip link set init_br nomaster 00:13:21.659 19:51:50 iscsi_tgt -- iscsi_tgt/common.sh@96 -- # ip link set tgt_br nomaster 00:13:21.659 19:51:50 iscsi_tgt -- iscsi_tgt/common.sh@97 -- # ip link set tgt_br2 nomaster 00:13:21.659 19:51:50 iscsi_tgt -- iscsi_tgt/common.sh@98 -- # ip link set init_br down 00:13:21.659 19:51:50 iscsi_tgt -- iscsi_tgt/common.sh@99 -- # ip link set tgt_br down 00:13:21.659 19:51:50 iscsi_tgt -- iscsi_tgt/common.sh@100 -- # ip link set tgt_br2 down 00:13:21.659 19:51:50 iscsi_tgt -- iscsi_tgt/common.sh@101 -- # ip link delete iscsi_br type bridge 00:13:21.659 19:51:50 iscsi_tgt -- iscsi_tgt/common.sh@102 -- # ip link delete spdk_init_int 00:13:21.659 19:51:50 iscsi_tgt -- iscsi_tgt/common.sh@103 -- # ip netns exec spdk_iscsi_ns ip link delete spdk_tgt_int 00:13:21.659 19:51:50 iscsi_tgt -- iscsi_tgt/common.sh@104 -- # ip netns exec spdk_iscsi_ns ip link delete spdk_tgt_int2 00:13:21.659 19:51:50 iscsi_tgt -- iscsi_tgt/common.sh@105 -- # ip netns del spdk_iscsi_ns 00:13:21.659 19:51:50 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:13:21.659 00:13:21.659 real 6m10.284s 00:13:21.659 user 12m8.968s 00:13:21.659 sys 1m42.713s 00:13:21.659 ************************************ 00:13:21.659 END TEST iscsi_tgt 00:13:21.659 ************************************ 00:13:21.659 19:51:50 iscsi_tgt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:21.659 19:51:50 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 
00:13:21.659 19:51:50 -- spdk/autotest.sh@268 -- # run_test spdkcli_iscsi /home/vagrant/spdk_repo/spdk/test/spdkcli/iscsi.sh 00:13:21.659 19:51:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:21.659 19:51:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:21.659 19:51:50 -- common/autotest_common.sh@10 -- # set +x 00:13:21.659 ************************************ 00:13:21.659 START TEST spdkcli_iscsi 00:13:21.659 ************************************ 00:13:21.659 19:51:50 spdkcli_iscsi -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/iscsi.sh 00:13:21.918 * Looking for test storage... 00:13:21.918 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:13:21.918 19:51:50 spdkcli_iscsi -- spdkcli/iscsi.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:13:21.918 19:51:50 spdkcli_iscsi -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:13:21.918 19:51:50 spdkcli_iscsi -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:13:21.918 19:51:50 spdkcli_iscsi -- spdkcli/iscsi.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:13:21.918 19:51:50 spdkcli_iscsi -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:13:21.918 19:51:50 spdkcli_iscsi -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:13:21.918 19:51:50 spdkcli_iscsi -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:13:21.918 19:51:50 spdkcli_iscsi -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:13:21.918 19:51:50 spdkcli_iscsi -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:13:21.918 19:51:50 spdkcli_iscsi -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:13:21.918 19:51:50 spdkcli_iscsi -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:13:21.918 19:51:50 spdkcli_iscsi -- 
iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:13:21.918 19:51:50 spdkcli_iscsi -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:13:21.918 19:51:50 spdkcli_iscsi -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:13:21.918 19:51:50 spdkcli_iscsi -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:13:21.918 19:51:50 spdkcli_iscsi -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:13:21.918 19:51:50 spdkcli_iscsi -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:13:21.918 19:51:50 spdkcli_iscsi -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:13:21.918 19:51:50 spdkcli_iscsi -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:13:21.918 19:51:50 spdkcli_iscsi -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:13:21.918 19:51:50 spdkcli_iscsi -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:13:21.918 19:51:50 spdkcli_iscsi -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:13:21.918 19:51:50 spdkcli_iscsi -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:13:21.918 19:51:50 spdkcli_iscsi -- spdkcli/iscsi.sh@12 -- # MATCH_FILE=spdkcli_iscsi.test 00:13:21.918 19:51:50 spdkcli_iscsi -- spdkcli/iscsi.sh@13 -- # SPDKCLI_BRANCH=/iscsi 00:13:21.918 19:51:50 spdkcli_iscsi -- spdkcli/iscsi.sh@15 -- # trap cleanup EXIT 00:13:21.918 19:51:50 spdkcli_iscsi -- spdkcli/iscsi.sh@17 -- # timing_enter run_iscsi_tgt 00:13:21.918 19:51:50 spdkcli_iscsi -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:21.918 19:51:50 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:13:21.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:21.918 19:51:50 spdkcli_iscsi -- spdkcli/iscsi.sh@21 -- # iscsi_tgt_pid=75057 00:13:21.918 19:51:50 spdkcli_iscsi -- spdkcli/iscsi.sh@22 -- # waitforlisten 75057 00:13:21.918 19:51:50 spdkcli_iscsi -- common/autotest_common.sh@831 -- # '[' -z 75057 ']' 00:13:21.918 19:51:50 spdkcli_iscsi -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.918 19:51:50 spdkcli_iscsi -- spdkcli/iscsi.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x3 -p 0 --wait-for-rpc 00:13:21.918 19:51:50 spdkcli_iscsi -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:21.918 19:51:50 spdkcli_iscsi -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.918 19:51:50 spdkcli_iscsi -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:21.918 19:51:50 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:13:21.918 [2024-07-24 19:51:50.493258] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:13:21.918 [2024-07-24 19:51:50.493402] [ DPDK EAL parameters: iscsi --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75057 ] 00:13:22.176 [2024-07-24 19:51:50.642150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:22.176 [2024-07-24 19:51:50.806929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.176 [2024-07-24 19:51:50.806929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.108 19:51:51 spdkcli_iscsi -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:23.108 19:51:51 spdkcli_iscsi -- common/autotest_common.sh@864 -- # return 0 00:13:23.108 19:51:51 spdkcli_iscsi -- spdkcli/iscsi.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:23.108 [2024-07-24 19:51:51.725291] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:23.365 19:51:52 spdkcli_iscsi -- spdkcli/iscsi.sh@25 -- # timing_exit run_iscsi_tgt 00:13:23.365 19:51:52 spdkcli_iscsi -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:23.365 19:51:52 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:13:23.622 19:51:52 spdkcli_iscsi -- spdkcli/iscsi.sh@27 -- # timing_enter spdkcli_create_iscsi_config 00:13:23.623 19:51:52 spdkcli_iscsi -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:23.623 19:51:52 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:13:23.623 19:51:52 spdkcli_iscsi -- spdkcli/iscsi.sh@48 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc0'\'' '\''Malloc0'\'' True 00:13:23.623 '\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:13:23.623 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:13:23.623 '\''/bdevs/malloc create 32 512 Malloc3'\'' 
'\''Malloc3'\'' True 00:13:23.623 '\''/iscsi/portal_groups create 1 "127.0.0.1:3261 127.0.0.1:3263@0x1"'\'' '\''host=127.0.0.1, port=3261'\'' True 00:13:23.623 '\''/iscsi/portal_groups create 2 127.0.0.1:3262'\'' '\''host=127.0.0.1, port=3262'\'' True 00:13:23.623 '\''/iscsi/initiator_groups create 2 ANY 10.0.2.15/32'\'' '\''hostname=ANY, netmask=10.0.2.15/32'\'' True 00:13:23.623 '\''/iscsi/initiator_groups create 3 ANZ 10.0.2.15/32'\'' '\''hostname=ANZ, netmask=10.0.2.15/32'\'' True 00:13:23.623 '\''/iscsi/initiator_groups add_initiator 2 ANW 10.0.2.16/32'\'' '\''hostname=ANW, netmask=10.0.2.16'\'' True 00:13:23.623 '\''/iscsi/target_nodes create Target0 Target0_alias "Malloc0:0 Malloc1:1" 1:2 64 g=1'\'' '\''Target0'\'' True 00:13:23.623 '\''/iscsi/target_nodes create Target1 Target1_alias Malloc2:0 1:2 64 g=1'\'' '\''Target1'\'' True 00:13:23.623 '\''/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 iscsi_target_node_add_pg_ig_maps "1:3 2:2"'\'' '\''portal_group1 - initiator_group3'\'' True 00:13:23.623 '\''/iscsi/target_nodes add_lun iqn.2016-06.io.spdk:Target1 Malloc3 2'\'' '\''Malloc3'\'' True 00:13:23.623 '\''/iscsi/auth_groups create 1 "user:test1 secret:test1 muser:mutual_test1 msecret:mutual_test1,user:test3 secret:test3 muser:mutual_test3 msecret:mutual_test3"'\'' '\''user=test3'\'' True 00:13:23.623 '\''/iscsi/auth_groups add_secret 1 user=test2 secret=test2 muser=mutual_test2 msecret=mutual_test2'\'' '\''user=test2'\'' True 00:13:23.623 '\''/iscsi/auth_groups create 2 "user:test4 secret:test4 muser:mutual_test4 msecret:mutual_test4"'\'' '\''user=test4'\'' True 00:13:23.623 '\''/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 set_auth g=1 d=true'\'' '\''disable_chap: True'\'' True 00:13:23.623 '\''/iscsi/global_params set_auth g=1 d=true r=false'\'' '\''disable_chap: True'\'' True 00:13:23.623 '\''/iscsi ls'\'' '\''Malloc'\'' True 00:13:23.623 ' 00:13:31.864 Executing command: ['/bdevs/malloc create 32 512 Malloc0', 'Malloc0', True] 00:13:31.864 Executing 
command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:13:31.864 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:13:31.864 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:13:31.864 Executing command: ['/iscsi/portal_groups create 1 "127.0.0.1:3261 127.0.0.1:3263@0x1"', 'host=127.0.0.1, port=3261', True] 00:13:31.864 Executing command: ['/iscsi/portal_groups create 2 127.0.0.1:3262', 'host=127.0.0.1, port=3262', True] 00:13:31.864 Executing command: ['/iscsi/initiator_groups create 2 ANY 10.0.2.15/32', 'hostname=ANY, netmask=10.0.2.15/32', True] 00:13:31.864 Executing command: ['/iscsi/initiator_groups create 3 ANZ 10.0.2.15/32', 'hostname=ANZ, netmask=10.0.2.15/32', True] 00:13:31.864 Executing command: ['/iscsi/initiator_groups add_initiator 2 ANW 10.0.2.16/32', 'hostname=ANW, netmask=10.0.2.16', True] 00:13:31.864 Executing command: ['/iscsi/target_nodes create Target0 Target0_alias "Malloc0:0 Malloc1:1" 1:2 64 g=1', 'Target0', True] 00:13:31.864 Executing command: ['/iscsi/target_nodes create Target1 Target1_alias Malloc2:0 1:2 64 g=1', 'Target1', True] 00:13:31.864 Executing command: ['/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 iscsi_target_node_add_pg_ig_maps "1:3 2:2"', 'portal_group1 - initiator_group3', True] 00:13:31.864 Executing command: ['/iscsi/target_nodes add_lun iqn.2016-06.io.spdk:Target1 Malloc3 2', 'Malloc3', True] 00:13:31.864 Executing command: ['/iscsi/auth_groups create 1 "user:test1 secret:test1 muser:mutual_test1 msecret:mutual_test1,user:test3 secret:test3 muser:mutual_test3 msecret:mutual_test3"', 'user=test3', True] 00:13:31.864 Executing command: ['/iscsi/auth_groups add_secret 1 user=test2 secret=test2 muser=mutual_test2 msecret=mutual_test2', 'user=test2', True] 00:13:31.864 Executing command: ['/iscsi/auth_groups create 2 "user:test4 secret:test4 muser:mutual_test4 msecret:mutual_test4"', 'user=test4', True] 00:13:31.864 Executing command: 
['/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 set_auth g=1 d=true', 'disable_chap: True', True] 00:13:31.864 Executing command: ['/iscsi/global_params set_auth g=1 d=true r=false', 'disable_chap: True', True] 00:13:31.864 Executing command: ['/iscsi ls', 'Malloc', True] 00:13:31.864 19:51:59 spdkcli_iscsi -- spdkcli/iscsi.sh@49 -- # timing_exit spdkcli_create_iscsi_config 00:13:31.864 19:51:59 spdkcli_iscsi -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:31.864 19:51:59 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:13:31.864 19:51:59 spdkcli_iscsi -- spdkcli/iscsi.sh@51 -- # timing_enter spdkcli_check_match 00:13:31.864 19:51:59 spdkcli_iscsi -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:31.864 19:51:59 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:13:31.864 19:51:59 spdkcli_iscsi -- spdkcli/iscsi.sh@52 -- # check_match 00:13:31.864 19:51:59 spdkcli_iscsi -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /iscsi 00:13:31.864 19:52:00 spdkcli_iscsi -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_iscsi.test.match 00:13:31.864 19:52:00 spdkcli_iscsi -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_iscsi.test 00:13:31.864 19:52:00 spdkcli_iscsi -- spdkcli/iscsi.sh@53 -- # timing_exit spdkcli_check_match 00:13:31.864 19:52:00 spdkcli_iscsi -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:31.864 19:52:00 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:13:31.864 19:52:00 spdkcli_iscsi -- spdkcli/iscsi.sh@55 -- # timing_enter spdkcli_clear_iscsi_config 00:13:31.864 19:52:00 spdkcli_iscsi -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:31.864 19:52:00 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:13:31.864 19:52:00 spdkcli_iscsi -- spdkcli/iscsi.sh@72 -- # 
/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/iscsi/auth_groups delete_secret 1 test2'\'' '\''user=test2'\'' 00:13:31.864 '\''/iscsi/auth_groups delete_secret_all 1'\'' '\''user=test1'\'' 00:13:31.864 '\''/iscsi/auth_groups delete 1'\'' '\''user=test1'\'' 00:13:31.864 '\''/iscsi/auth_groups delete_all'\'' '\''user=test4'\'' 00:13:31.864 '\''/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 iscsi_target_node_remove_pg_ig_maps "1:3 2:2"'\'' '\''portal_group1 - initiator_group3'\'' 00:13:31.864 '\''/iscsi/target_nodes delete iqn.2016-06.io.spdk:Target1'\'' '\''Target1'\'' 00:13:31.864 '\''/iscsi/target_nodes delete_all'\'' '\''Target0'\'' 00:13:31.864 '\''/iscsi/initiator_groups delete_initiator 2 ANW 10.0.2.16/32'\'' '\''ANW'\'' 00:13:31.864 '\''/iscsi/initiator_groups delete 3'\'' '\''ANZ'\'' 00:13:31.864 '\''/iscsi/initiator_groups delete_all'\'' '\''ANY'\'' 00:13:31.864 '\''/iscsi/portal_groups delete 1'\'' '\''127.0.0.1:3261'\'' 00:13:31.864 '\''/iscsi/portal_groups delete_all'\'' '\''127.0.0.1:3262'\'' 00:13:31.864 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:13:31.864 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:13:31.864 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:13:31.864 '\''/bdevs/malloc delete Malloc0'\'' '\''Malloc0'\'' 00:13:31.864 ' 00:13:38.430 Executing command: ['/iscsi/auth_groups delete_secret 1 test2', 'user=test2', False] 00:13:38.430 Executing command: ['/iscsi/auth_groups delete_secret_all 1', 'user=test1', False] 00:13:38.430 Executing command: ['/iscsi/auth_groups delete 1', 'user=test1', False] 00:13:38.430 Executing command: ['/iscsi/auth_groups delete_all', 'user=test4', False] 00:13:38.430 Executing command: ['/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 iscsi_target_node_remove_pg_ig_maps "1:3 2:2"', 'portal_group1 - initiator_group3', False] 00:13:38.430 Executing command: ['/iscsi/target_nodes delete iqn.2016-06.io.spdk:Target1', 'Target1', False] 00:13:38.430 Executing command: 
['/iscsi/target_nodes delete_all', 'Target0', False] 00:13:38.430 Executing command: ['/iscsi/initiator_groups delete_initiator 2 ANW 10.0.2.16/32', 'ANW', False] 00:13:38.430 Executing command: ['/iscsi/initiator_groups delete 3', 'ANZ', False] 00:13:38.430 Executing command: ['/iscsi/initiator_groups delete_all', 'ANY', False] 00:13:38.430 Executing command: ['/iscsi/portal_groups delete 1', '127.0.0.1:3261', False] 00:13:38.430 Executing command: ['/iscsi/portal_groups delete_all', '127.0.0.1:3262', False] 00:13:38.430 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:13:38.430 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:13:38.430 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:13:38.430 Executing command: ['/bdevs/malloc delete Malloc0', 'Malloc0', False] 00:13:38.430 19:52:06 spdkcli_iscsi -- spdkcli/iscsi.sh@73 -- # timing_exit spdkcli_clear_iscsi_config 00:13:38.430 19:52:06 spdkcli_iscsi -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:38.430 19:52:06 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:13:38.430 19:52:06 spdkcli_iscsi -- spdkcli/iscsi.sh@75 -- # killprocess 75057 00:13:38.430 19:52:06 spdkcli_iscsi -- common/autotest_common.sh@950 -- # '[' -z 75057 ']' 00:13:38.430 19:52:06 spdkcli_iscsi -- common/autotest_common.sh@954 -- # kill -0 75057 00:13:38.430 19:52:06 spdkcli_iscsi -- common/autotest_common.sh@955 -- # uname 00:13:38.430 19:52:06 spdkcli_iscsi -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:38.430 19:52:06 spdkcli_iscsi -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75057 00:13:38.430 19:52:07 spdkcli_iscsi -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:38.430 19:52:07 spdkcli_iscsi -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:38.430 19:52:07 spdkcli_iscsi -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75057' 00:13:38.430 killing process 
with pid 75057 00:13:38.430 19:52:07 spdkcli_iscsi -- common/autotest_common.sh@969 -- # kill 75057 00:13:38.430 19:52:07 spdkcli_iscsi -- common/autotest_common.sh@974 -- # wait 75057 00:13:39.361 19:52:07 spdkcli_iscsi -- spdkcli/iscsi.sh@1 -- # cleanup 00:13:39.361 19:52:07 spdkcli_iscsi -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:13:39.361 19:52:07 spdkcli_iscsi -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:13:39.361 19:52:07 spdkcli_iscsi -- spdkcli/common.sh@16 -- # '[' -n 75057 ']' 00:13:39.361 19:52:07 spdkcli_iscsi -- spdkcli/common.sh@17 -- # killprocess 75057 00:13:39.361 19:52:07 spdkcli_iscsi -- common/autotest_common.sh@950 -- # '[' -z 75057 ']' 00:13:39.361 19:52:07 spdkcli_iscsi -- common/autotest_common.sh@954 -- # kill -0 75057 00:13:39.361 Process with pid 75057 is not found 00:13:39.361 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (75057) - No such process 00:13:39.361 19:52:07 spdkcli_iscsi -- common/autotest_common.sh@977 -- # echo 'Process with pid 75057 is not found' 00:13:39.361 19:52:07 spdkcli_iscsi -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:13:39.361 19:52:07 spdkcli_iscsi -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_iscsi.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:13:39.361 ************************************ 00:13:39.361 END TEST spdkcli_iscsi 00:13:39.361 ************************************ 00:13:39.361 00:13:39.361 real 0m17.359s 00:13:39.361 user 0m36.874s 00:13:39.361 sys 0m1.415s 00:13:39.361 19:52:07 spdkcli_iscsi -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:39.361 19:52:07 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:13:39.361 19:52:07 -- spdk/autotest.sh@271 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:13:39.361 19:52:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:39.361 19:52:07 -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:13:39.361 19:52:07 -- common/autotest_common.sh@10 -- # set +x 00:13:39.361 ************************************ 00:13:39.361 START TEST spdkcli_raid 00:13:39.361 ************************************ 00:13:39.361 19:52:07 spdkcli_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:13:39.361 * Looking for test storage... 00:13:39.361 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:13:39.361 19:52:07 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:13:39.361 19:52:07 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:13:39.361 19:52:07 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:13:39.361 19:52:07 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:13:39.361 19:52:07 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:13:39.361 19:52:07 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:13:39.361 19:52:07 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:13:39.361 19:52:07 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:13:39.361 19:52:07 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:13:39.361 19:52:07 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:13:39.361 19:52:07 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:13:39.361 19:52:07 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:13:39.361 19:52:07 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:13:39.361 19:52:07 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:13:39.361 19:52:07 
spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:13:39.361 19:52:07 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:13:39.361 19:52:07 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:13:39.361 19:52:07 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:13:39.361 19:52:07 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:13:39.361 19:52:07 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:13:39.361 19:52:07 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:13:39.361 19:52:07 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:13:39.361 19:52:07 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:13:39.361 19:52:07 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:13:39.361 19:52:07 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:13:39.361 19:52:07 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:13:39.361 19:52:07 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:13:39.361 19:52:07 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:13:39.361 19:52:07 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:13:39.361 19:52:07 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:13:39.361 19:52:07 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:13:39.361 19:52:07 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:13:39.361 19:52:07 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:13:39.361 19:52:07 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:39.361 19:52:07 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:13:39.361 19:52:07 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:13:39.361 19:52:07 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=75372 00:13:39.361 19:52:07 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:13:39.361 19:52:07 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 75372 00:13:39.361 19:52:07 spdkcli_raid -- common/autotest_common.sh@831 -- # '[' -z 75372 ']' 00:13:39.361 19:52:07 spdkcli_raid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.361 19:52:07 spdkcli_raid -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:39.361 19:52:07 spdkcli_raid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.361 19:52:07 spdkcli_raid -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:39.361 19:52:07 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:13:39.361 [2024-07-24 19:52:07.899651] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:13:39.361 [2024-07-24 19:52:07.900103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75372 ] 00:13:39.620 [2024-07-24 19:52:08.043798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:39.620 [2024-07-24 19:52:08.215520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.620 [2024-07-24 19:52:08.215524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.879 [2024-07-24 19:52:08.299899] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:40.446 19:52:09 spdkcli_raid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:40.446 19:52:09 spdkcli_raid -- common/autotest_common.sh@864 -- # return 0 00:13:40.446 19:52:09 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:13:40.446 19:52:09 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:40.446 19:52:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:13:40.446 19:52:09 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:13:40.446 19:52:09 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:40.446 19:52:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:13:40.446 19:52:09 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:13:40.446 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:13:40.446 ' 00:13:42.348 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:13:42.348 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:13:42.348 19:52:10 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 
00:13:42.348 19:52:10 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:42.348 19:52:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:13:42.348 19:52:10 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:13:42.348 19:52:10 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:42.348 19:52:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:13:42.349 19:52:10 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:13:42.349 ' 00:13:43.723 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:13:43.723 19:52:12 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:13:43.723 19:52:12 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:43.723 19:52:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:13:43.723 19:52:12 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:13:43.723 19:52:12 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:43.723 19:52:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:13:43.723 19:52:12 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:13:43.723 19:52:12 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:13:44.289 19:52:12 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:13:44.289 19:52:12 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:13:44.289 19:52:12 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:13:44.289 19:52:12 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 
00:13:44.289 19:52:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:13:44.289 19:52:12 spdkcli_raid -- spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:13:44.289 19:52:12 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:44.289 19:52:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:13:44.289 19:52:12 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:13:44.289 ' 00:13:45.229 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:13:45.488 19:52:13 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:13:45.488 19:52:13 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:45.488 19:52:13 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:13:45.488 19:52:14 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:13:45.488 19:52:14 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:45.488 19:52:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:13:45.488 19:52:14 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:13:45.488 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:13:45.488 ' 00:13:46.935 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:13:46.935 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:13:46.935 19:52:15 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:13:46.935 19:52:15 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:46.935 19:52:15 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:13:46.935 19:52:15 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 75372 00:13:46.935 19:52:15 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 75372 ']' 00:13:46.935 19:52:15 
spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 75372 00:13:46.935 19:52:15 spdkcli_raid -- common/autotest_common.sh@955 -- # uname 00:13:46.935 19:52:15 spdkcli_raid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:46.935 19:52:15 spdkcli_raid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75372 00:13:46.935 19:52:15 spdkcli_raid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:46.935 killing process with pid 75372 00:13:46.935 19:52:15 spdkcli_raid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:46.935 19:52:15 spdkcli_raid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75372' 00:13:46.935 19:52:15 spdkcli_raid -- common/autotest_common.sh@969 -- # kill 75372 00:13:46.935 19:52:15 spdkcli_raid -- common/autotest_common.sh@974 -- # wait 75372 00:13:47.500 Process with pid 75372 is not found 00:13:47.500 19:52:16 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:13:47.500 19:52:16 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 75372 ']' 00:13:47.500 19:52:16 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 75372 00:13:47.500 19:52:16 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 75372 ']' 00:13:47.500 19:52:16 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 75372 00:13:47.500 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (75372) - No such process 00:13:47.500 19:52:16 spdkcli_raid -- common/autotest_common.sh@977 -- # echo 'Process with pid 75372 is not found' 00:13:47.500 19:52:16 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:13:47.500 19:52:16 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:13:47.500 19:52:16 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:13:47.500 19:52:16 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test 
/tmp/sample_aio 00:13:47.500 ************************************ 00:13:47.500 END TEST spdkcli_raid 00:13:47.500 ************************************ 00:13:47.500 00:13:47.500 real 0m8.438s 00:13:47.500 user 0m18.263s 00:13:47.500 sys 0m1.188s 00:13:47.500 19:52:16 spdkcli_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:47.500 19:52:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:13:47.759 19:52:16 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:13:47.759 19:52:16 -- spdk/autotest.sh@283 -- # '[' 0 -eq 1 ']' 00:13:47.759 19:52:16 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:13:47.759 19:52:16 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:13:47.759 19:52:16 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:13:47.759 19:52:16 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:13:47.759 19:52:16 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:13:47.759 19:52:16 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:13:47.759 19:52:16 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:13:47.759 19:52:16 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:13:47.759 19:52:16 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:13:47.759 19:52:16 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:13:47.759 19:52:16 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:13:47.759 19:52:16 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:13:47.759 19:52:16 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:13:47.759 19:52:16 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:13:47.759 19:52:16 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:13:47.759 19:52:16 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:13:47.759 19:52:16 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:13:47.759 19:52:16 -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:47.759 19:52:16 -- common/autotest_common.sh@10 -- # set +x 00:13:47.759 19:52:16 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:13:47.759 19:52:16 -- common/autotest_common.sh@1392 -- # local autotest_es=0 
00:13:47.759 19:52:16 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:13:47.759 19:52:16 -- common/autotest_common.sh@10 -- # set +x 00:13:49.718 INFO: APP EXITING 00:13:49.718 INFO: killing all VMs 00:13:49.718 INFO: killing vhost app 00:13:49.718 INFO: EXIT DONE 00:13:49.976 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:49.976 Waiting for block devices as requested 00:13:49.976 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:49.976 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:50.911 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:50.911 Cleaning 00:13:50.911 Removing: /var/run/dpdk/spdk0/config 00:13:50.911 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:13:50.911 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:13:50.911 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:13:50.911 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:13:50.911 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:13:50.911 Removing: /var/run/dpdk/spdk0/hugepage_info 00:13:50.911 Removing: /var/run/dpdk/spdk1/config 00:13:50.911 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:13:50.911 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:13:50.911 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:13:50.911 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:13:50.911 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:13:50.911 Removing: /var/run/dpdk/spdk1/hugepage_info 00:13:50.911 Removing: /dev/shm/iscsi_trace.pid70457 00:13:50.911 Removing: /dev/shm/spdk_tgt_trace.pid58863 00:13:50.911 Removing: /var/run/dpdk/spdk0 00:13:50.911 Removing: /var/run/dpdk/spdk1 00:13:50.911 Removing: /var/run/dpdk/spdk_pid58718 00:13:50.911 Removing: /var/run/dpdk/spdk_pid58863 00:13:50.911 Removing: /var/run/dpdk/spdk_pid59054 00:13:50.911 Removing: /var/run/dpdk/spdk_pid59142 
00:13:50.911 Removing: /var/run/dpdk/spdk_pid59164 00:13:50.911 Removing: /var/run/dpdk/spdk_pid59279 00:13:50.911 Removing: /var/run/dpdk/spdk_pid59297 00:13:50.911 Removing: /var/run/dpdk/spdk_pid59415 00:13:50.911 Removing: /var/run/dpdk/spdk_pid59602 00:13:50.911 Removing: /var/run/dpdk/spdk_pid59748 00:13:50.911 Removing: /var/run/dpdk/spdk_pid59813 00:13:50.911 Removing: /var/run/dpdk/spdk_pid59883 00:13:50.911 Removing: /var/run/dpdk/spdk_pid59974 00:13:50.911 Removing: /var/run/dpdk/spdk_pid60046 00:13:50.911 Removing: /var/run/dpdk/spdk_pid60084 00:13:50.911 Removing: /var/run/dpdk/spdk_pid60120 00:13:50.911 Removing: /var/run/dpdk/spdk_pid60181 00:13:50.911 Removing: /var/run/dpdk/spdk_pid60281 00:13:50.911 Removing: /var/run/dpdk/spdk_pid60725 00:13:50.911 Removing: /var/run/dpdk/spdk_pid60782 00:13:50.911 Removing: /var/run/dpdk/spdk_pid60833 00:13:50.911 Removing: /var/run/dpdk/spdk_pid60849 00:13:50.911 Removing: /var/run/dpdk/spdk_pid60927 00:13:50.911 Removing: /var/run/dpdk/spdk_pid60949 00:13:50.911 Removing: /var/run/dpdk/spdk_pid61027 00:13:50.911 Removing: /var/run/dpdk/spdk_pid61043 00:13:50.911 Removing: /var/run/dpdk/spdk_pid61094 00:13:50.911 Removing: /var/run/dpdk/spdk_pid61112 00:13:50.911 Removing: /var/run/dpdk/spdk_pid61152 00:13:50.911 Removing: /var/run/dpdk/spdk_pid61174 00:13:50.911 Removing: /var/run/dpdk/spdk_pid61298 00:13:50.911 Removing: /var/run/dpdk/spdk_pid61339 00:13:50.911 Removing: /var/run/dpdk/spdk_pid61414 00:13:50.911 Removing: /var/run/dpdk/spdk_pid61724 00:13:50.911 Removing: /var/run/dpdk/spdk_pid61741 00:13:50.911 Removing: /var/run/dpdk/spdk_pid61773 00:13:51.170 Removing: /var/run/dpdk/spdk_pid61791 00:13:51.170 Removing: /var/run/dpdk/spdk_pid61812 00:13:51.170 Removing: /var/run/dpdk/spdk_pid61837 00:13:51.170 Removing: /var/run/dpdk/spdk_pid61856 00:13:51.170 Removing: /var/run/dpdk/spdk_pid61877 00:13:51.170 Removing: /var/run/dpdk/spdk_pid61898 00:13:51.170 Removing: /var/run/dpdk/spdk_pid61917 
00:13:51.170 Removing: /var/run/dpdk/spdk_pid61938 00:13:51.170 Removing: /var/run/dpdk/spdk_pid61958 00:13:51.170 Removing: /var/run/dpdk/spdk_pid61976 00:13:51.170 Removing: /var/run/dpdk/spdk_pid61997 00:13:51.170 Removing: /var/run/dpdk/spdk_pid62021 00:13:51.170 Removing: /var/run/dpdk/spdk_pid62035 00:13:51.170 Removing: /var/run/dpdk/spdk_pid62056 00:13:51.170 Removing: /var/run/dpdk/spdk_pid62086 00:13:51.170 Removing: /var/run/dpdk/spdk_pid62105 00:13:51.170 Removing: /var/run/dpdk/spdk_pid62129 00:13:51.170 Removing: /var/run/dpdk/spdk_pid62165 00:13:51.170 Removing: /var/run/dpdk/spdk_pid62173 00:13:51.170 Removing: /var/run/dpdk/spdk_pid62208 00:13:51.170 Removing: /var/run/dpdk/spdk_pid62272 00:13:51.170 Removing: /var/run/dpdk/spdk_pid62306 00:13:51.170 Removing: /var/run/dpdk/spdk_pid62320 00:13:51.170 Removing: /var/run/dpdk/spdk_pid62344 00:13:51.170 Removing: /var/run/dpdk/spdk_pid62359 00:13:51.170 Removing: /var/run/dpdk/spdk_pid62367 00:13:51.170 Removing: /var/run/dpdk/spdk_pid62409 00:13:51.170 Removing: /var/run/dpdk/spdk_pid62423 00:13:51.170 Removing: /var/run/dpdk/spdk_pid62451 00:13:51.170 Removing: /var/run/dpdk/spdk_pid62466 00:13:51.170 Removing: /var/run/dpdk/spdk_pid62481 00:13:51.170 Removing: /var/run/dpdk/spdk_pid62491 00:13:51.170 Removing: /var/run/dpdk/spdk_pid62506 00:13:51.170 Removing: /var/run/dpdk/spdk_pid62515 00:13:51.170 Removing: /var/run/dpdk/spdk_pid62530 00:13:51.170 Removing: /var/run/dpdk/spdk_pid62544 00:13:51.170 Removing: /var/run/dpdk/spdk_pid62574 00:13:51.170 Removing: /var/run/dpdk/spdk_pid62606 00:13:51.170 Removing: /var/run/dpdk/spdk_pid62615 00:13:51.170 Removing: /var/run/dpdk/spdk_pid62649 00:13:51.170 Removing: /var/run/dpdk/spdk_pid62661 00:13:51.170 Removing: /var/run/dpdk/spdk_pid62672 00:13:51.170 Removing: /var/run/dpdk/spdk_pid62719 00:13:51.170 Removing: /var/run/dpdk/spdk_pid62736 00:13:51.170 Removing: /var/run/dpdk/spdk_pid62762 00:13:51.170 Removing: /var/run/dpdk/spdk_pid62770 
00:13:51.170 Removing: /var/run/dpdk/spdk_pid62783 00:13:51.170 Removing: /var/run/dpdk/spdk_pid62796 00:13:51.170 Removing: /var/run/dpdk/spdk_pid62798 00:13:51.170 Removing: /var/run/dpdk/spdk_pid62811 00:13:51.170 Removing: /var/run/dpdk/spdk_pid62813 00:13:51.170 Removing: /var/run/dpdk/spdk_pid62826 00:13:51.170 Removing: /var/run/dpdk/spdk_pid62900 00:13:51.170 Removing: /var/run/dpdk/spdk_pid62942 00:13:51.170 Removing: /var/run/dpdk/spdk_pid63041 00:13:51.170 Removing: /var/run/dpdk/spdk_pid63080 00:13:51.170 Removing: /var/run/dpdk/spdk_pid63125 00:13:51.170 Removing: /var/run/dpdk/spdk_pid63140 00:13:51.170 Removing: /var/run/dpdk/spdk_pid63156 00:13:51.170 Removing: /var/run/dpdk/spdk_pid63176 00:13:51.170 Removing: /var/run/dpdk/spdk_pid63212 00:13:51.170 Removing: /var/run/dpdk/spdk_pid63229 00:13:51.170 Removing: /var/run/dpdk/spdk_pid63300 00:13:51.170 Removing: /var/run/dpdk/spdk_pid63322 00:13:51.170 Removing: /var/run/dpdk/spdk_pid63373 00:13:51.170 Removing: /var/run/dpdk/spdk_pid63459 00:13:51.170 Removing: /var/run/dpdk/spdk_pid63526 00:13:51.170 Removing: /var/run/dpdk/spdk_pid63557 00:13:51.170 Removing: /var/run/dpdk/spdk_pid63652 00:13:51.170 Removing: /var/run/dpdk/spdk_pid63700 00:13:51.170 Removing: /var/run/dpdk/spdk_pid63738 00:13:51.170 Removing: /var/run/dpdk/spdk_pid63957 00:13:51.170 Removing: /var/run/dpdk/spdk_pid64054 00:13:51.170 Removing: /var/run/dpdk/spdk_pid64088 00:13:51.170 Removing: /var/run/dpdk/spdk_pid64342 00:13:51.170 Removing: /var/run/dpdk/spdk_pid64372 00:13:51.428 Removing: /var/run/dpdk/spdk_pid64391 00:13:51.428 Removing: /var/run/dpdk/spdk_pid64433 00:13:51.428 Removing: /var/run/dpdk/spdk_pid64445 00:13:51.428 Removing: /var/run/dpdk/spdk_pid64462 00:13:51.428 Removing: /var/run/dpdk/spdk_pid64489 00:13:51.428 Removing: /var/run/dpdk/spdk_pid64493 00:13:51.428 Removing: /var/run/dpdk/spdk_pid64544 00:13:51.428 Removing: /var/run/dpdk/spdk_pid64563 00:13:51.428 Removing: /var/run/dpdk/spdk_pid64614 
00:13:51.428 Removing: /var/run/dpdk/spdk_pid64706 00:13:51.428 Removing: /var/run/dpdk/spdk_pid65467 00:13:51.428 Removing: /var/run/dpdk/spdk_pid65906 00:13:51.428 Removing: /var/run/dpdk/spdk_pid66179 00:13:51.428 Removing: /var/run/dpdk/spdk_pid66484 00:13:51.428 Removing: /var/run/dpdk/spdk_pid66728 00:13:51.428 Removing: /var/run/dpdk/spdk_pid67347 00:13:51.428 Removing: /var/run/dpdk/spdk_pid68765 00:13:51.428 Removing: /var/run/dpdk/spdk_pid69353 00:13:51.428 Removing: /var/run/dpdk/spdk_pid70113 00:13:51.428 Removing: /var/run/dpdk/spdk_pid70148 00:13:51.428 Removing: /var/run/dpdk/spdk_pid70457 00:13:51.428 Removing: /var/run/dpdk/spdk_pid71716 00:13:51.428 Removing: /var/run/dpdk/spdk_pid72103 00:13:51.428 Removing: /var/run/dpdk/spdk_pid72149 00:13:51.428 Removing: /var/run/dpdk/spdk_pid72534 00:13:51.428 Removing: /var/run/dpdk/spdk_pid75057 00:13:51.428 Removing: /var/run/dpdk/spdk_pid75372 00:13:51.428 Clean 00:13:51.428 19:52:20 -- common/autotest_common.sh@1451 -- # return 0 00:13:51.428 19:52:20 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:13:51.428 19:52:20 -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:51.428 19:52:20 -- common/autotest_common.sh@10 -- # set +x 00:13:51.428 19:52:20 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:13:51.428 19:52:20 -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:51.428 19:52:20 -- common/autotest_common.sh@10 -- # set +x 00:13:51.428 19:52:20 -- spdk/autotest.sh@391 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:13:51.685 19:52:20 -- spdk/autotest.sh@393 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:13:51.685 19:52:20 -- spdk/autotest.sh@393 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:13:51.685 19:52:20 -- spdk/autotest.sh@395 -- # hash lcov 00:13:51.685 19:52:20 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:13:51.685 19:52:20 -- spdk/autotest.sh@397 -- # hostname 00:13:51.685 19:52:20 -- 
spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:13:51.685 geninfo: WARNING: invalid characters removed from testname! 00:14:23.810 19:52:47 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:14:23.810 19:52:51 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:14:26.382 19:52:54 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:14:29.667 19:52:58 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:14:32.953 19:53:00 -- 
spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:14:35.484 19:53:03 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:14:38.015 19:53:06 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:14:38.015 19:53:06 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:38.015 19:53:06 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:14:38.015 19:53:06 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:38.015 19:53:06 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:38.015 19:53:06 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.015 19:53:06 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.015 19:53:06 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.015 19:53:06 -- paths/export.sh@5 -- $ export PATH 00:14:38.015 19:53:06 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.015 19:53:06 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:14:38.015 19:53:06 -- common/autobuild_common.sh@447 -- $ date +%s 00:14:38.015 19:53:06 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721850786.XXXXXX 00:14:38.015 19:53:06 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721850786.r79WnI 00:14:38.015 19:53:06 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:14:38.015 19:53:06 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:14:38.015 19:53:06 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:14:38.016 19:53:06 -- common/autobuild_common.sh@460 
-- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:14:38.016 19:53:06 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:14:38.016 19:53:06 -- common/autobuild_common.sh@463 -- $ get_config_params 00:14:38.016 19:53:06 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:14:38.016 19:53:06 -- common/autotest_common.sh@10 -- $ set +x 00:14:38.016 19:53:06 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:14:38.016 19:53:06 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:14:38.016 19:53:06 -- pm/common@17 -- $ local monitor 00:14:38.016 19:53:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:14:38.016 19:53:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:14:38.016 19:53:06 -- pm/common@25 -- $ sleep 1 00:14:38.016 19:53:06 -- pm/common@21 -- $ date +%s 00:14:38.016 19:53:06 -- pm/common@21 -- $ date +%s 00:14:38.016 19:53:06 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721850786 00:14:38.016 19:53:06 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721850786 00:14:38.016 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721850786_collect-vmstat.pm.log 00:14:38.016 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721850786_collect-cpu-load.pm.log 00:14:38.950 19:53:07 -- common/autobuild_common.sh@466 -- $ trap 
stop_monitor_resources EXIT 00:14:38.950 19:53:07 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:14:38.950 19:53:07 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:14:38.950 19:53:07 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:14:38.950 19:53:07 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:14:38.950 19:53:07 -- spdk/autopackage.sh@19 -- $ timing_finish 00:14:38.950 19:53:07 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:14:38.950 19:53:07 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:14:38.950 19:53:07 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:14:39.208 19:53:07 -- spdk/autopackage.sh@20 -- $ exit 0 00:14:39.208 19:53:07 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:14:39.208 19:53:07 -- pm/common@29 -- $ signal_monitor_resources TERM 00:14:39.208 19:53:07 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:14:39.208 19:53:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:14:39.208 19:53:07 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:14:39.208 19:53:07 -- pm/common@44 -- $ pid=77193 00:14:39.208 19:53:07 -- pm/common@50 -- $ kill -TERM 77193 00:14:39.208 19:53:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:14:39.208 19:53:07 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:14:39.208 19:53:07 -- pm/common@44 -- $ pid=77194 00:14:39.208 19:53:07 -- pm/common@50 -- $ kill -TERM 77194 00:14:39.208 + [[ -n 5173 ]] 00:14:39.208 + sudo kill 5173 00:14:39.218 [Pipeline] } 00:14:39.235 [Pipeline] // timeout 00:14:39.240 [Pipeline] } 00:14:39.258 [Pipeline] // stage 00:14:39.264 [Pipeline] } 00:14:39.284 [Pipeline] // catchError 00:14:39.293 [Pipeline] stage 
00:14:39.296 [Pipeline] { (Stop VM) 00:14:39.310 [Pipeline] sh 00:14:39.662 + vagrant halt 00:14:43.869 ==> default: Halting domain... 00:14:49.153 [Pipeline] sh 00:14:49.431 + vagrant destroy -f 00:14:53.610 ==> default: Removing domain... 00:14:53.618 [Pipeline] sh 00:14:53.891 + mv output /var/jenkins/workspace/iscsi-uring-vg-autotest_2/output 00:14:53.899 [Pipeline] } 00:14:53.918 [Pipeline] // stage 00:14:53.925 [Pipeline] } 00:14:53.941 [Pipeline] // dir 00:14:53.947 [Pipeline] } 00:14:53.964 [Pipeline] // wrap 00:14:53.969 [Pipeline] } 00:14:53.983 [Pipeline] // catchError 00:14:53.991 [Pipeline] stage 00:14:53.993 [Pipeline] { (Epilogue) 00:14:54.006 [Pipeline] sh 00:14:54.286 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:15:00.897 [Pipeline] catchError 00:15:00.898 [Pipeline] { 00:15:00.911 [Pipeline] sh 00:15:01.188 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:15:01.446 Artifacts sizes are good 00:15:01.453 [Pipeline] } 00:15:01.471 [Pipeline] // catchError 00:15:01.482 [Pipeline] archiveArtifacts 00:15:01.488 Archiving artifacts 00:15:02.526 [Pipeline] cleanWs 00:15:02.536 [WS-CLEANUP] Deleting project workspace... 00:15:02.536 [WS-CLEANUP] Deferred wipeout is used... 00:15:02.542 [WS-CLEANUP] done 00:15:02.544 [Pipeline] } 00:15:02.562 [Pipeline] // stage 00:15:02.568 [Pipeline] } 00:15:02.584 [Pipeline] // node 00:15:02.590 [Pipeline] End of Pipeline 00:15:02.628 Finished: SUCCESS