00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2024
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3289
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.072 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/iscsi-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.073 The recommended git tool is: git
00:00:00.074 using credential 00000000-0000-0000-0000-000000000002
00:00:00.076 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/iscsi-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.102 Fetching changes from the remote Git repository
00:00:00.114 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.132 Using shallow fetch with depth 1
00:00:00.132 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.132 > git --version # timeout=10
00:00:00.157 > git --version # 'git version 2.39.2'
00:00:00.157 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.169 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.169 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.795 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.807 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.818 Checking out Revision 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 (FETCH_HEAD)
00:00:05.818 > git config core.sparsecheckout # timeout=10
00:00:05.827 > git read-tree -mu HEAD # timeout=10
00:00:05.845 > git checkout -f 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 # timeout=5
00:00:05.864 Commit message: "doc: add chapter about running CI Vagrant images on dev-systems"
00:00:05.864 > git rev-list --no-walk 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 # timeout=10
00:00:05.976 [Pipeline] Start of Pipeline
00:00:05.994 [Pipeline] library
00:00:05.997 Loading library shm_lib@master
00:00:05.997 Library shm_lib@master is cached. Copying from home.
00:00:06.013 [Pipeline] node
00:00:06.030 Running on VM-host-SM17 in /var/jenkins/workspace/iscsi-vg-autotest
00:00:06.031 [Pipeline] {
00:00:06.042 [Pipeline] catchError
00:00:06.043 [Pipeline] {
00:00:06.053 [Pipeline] wrap
00:00:06.060 [Pipeline] {
00:00:06.068 [Pipeline] stage
00:00:06.069 [Pipeline] { (Prologue)
00:00:06.084 [Pipeline] echo
00:00:06.085 Node: VM-host-SM17
00:00:06.089 [Pipeline] cleanWs
00:00:06.099 [WS-CLEANUP] Deleting project workspace...
00:00:06.099 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.106 [WS-CLEANUP] done
00:00:06.298 [Pipeline] setCustomBuildProperty
00:00:06.381 [Pipeline] httpRequest
00:00:06.415 [Pipeline] echo
00:00:06.416 Sorcerer 10.211.164.101 is alive
00:00:06.422 [Pipeline] httpRequest
00:00:06.428 HttpMethod: GET
00:00:06.428 URL: http://10.211.164.101/packages/jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz
00:00:06.431 Sending request to url: http://10.211.164.101/packages/jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz
00:00:06.442 Response Code: HTTP/1.1 200 OK
00:00:06.443 Success: Status code 200 is in the accepted range: 200,404
00:00:06.443 Saving response body to /var/jenkins/workspace/iscsi-vg-autotest/jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz
00:00:11.540 [Pipeline] sh
00:00:11.824 + tar --no-same-owner -xf jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz
00:00:11.840 [Pipeline] httpRequest
00:00:11.868 [Pipeline] echo
00:00:11.870 Sorcerer 10.211.164.101 is alive
00:00:11.879 [Pipeline] httpRequest
00:00:11.883 HttpMethod: GET
00:00:11.884 URL: http://10.211.164.101/packages/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz
00:00:11.885 Sending request to url: http://10.211.164.101/packages/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz
00:00:11.907 Response Code: HTTP/1.1 200 OK
00:00:11.908 Success: Status code 200 is in the accepted range: 200,404
00:00:11.908 Saving response body to /var/jenkins/workspace/iscsi-vg-autotest/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz
00:01:21.737 [Pipeline] sh
00:01:22.016 + tar --no-same-owner -xf spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz
00:01:24.560 [Pipeline] sh
00:01:24.841 + git -C spdk log --oneline -n5
00:01:24.841 f7b31b2b9 log: declare g_deprecation_epoch static
00:01:24.841 21d0c3ad6 trace: declare g_user_thread_index_start, g_ut_array and g_ut_array_mutex static
00:01:24.841 3731556bd lvol: declare g_lvol_if static
00:01:24.841 f8404a2d4 nvme: declare g_current_transport_index and g_spdk_transports static
00:01:24.841 34efb6523 dma: declare g_dma_mutex and g_dma_memory_domains static
00:01:24.860 [Pipeline] withCredentials
00:01:24.899 > git --version # timeout=10
00:01:24.911 > git --version # 'git version 2.39.2'
00:01:24.927 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:01:24.930 [Pipeline] {
00:01:24.939 [Pipeline] retry
00:01:24.941 [Pipeline] {
00:01:24.958 [Pipeline] sh
00:01:25.238 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:01:25.249 [Pipeline] }
00:01:25.270 [Pipeline] // retry
00:01:25.275 [Pipeline] }
00:01:25.296 [Pipeline] // withCredentials
00:01:25.306 [Pipeline] httpRequest
00:01:25.324 [Pipeline] echo
00:01:25.326 Sorcerer 10.211.164.101 is alive
00:01:25.335 [Pipeline] httpRequest
00:01:25.340 HttpMethod: GET
00:01:25.340 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:25.341 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:25.342 Response Code: HTTP/1.1 200 OK
00:01:25.343 Success: Status code 200 is in the accepted range: 200,404
00:01:25.343 Saving response body to /var/jenkins/workspace/iscsi-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:32.164 [Pipeline] sh
00:01:32.474 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:33.867 [Pipeline] sh
00:01:34.150 + git -C dpdk log --oneline -n5
00:01:34.150 caf0f5d395 version: 22.11.4
00:01:34.150 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:01:34.150 dc9c799c7d vhost: fix missing spinlock unlock
00:01:34.150 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:01:34.150 6ef77f2a5e net/gve: fix RX buffer size alignment
00:01:34.168 [Pipeline] writeFile
00:01:34.183 [Pipeline] sh
00:01:34.463 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:34.475 [Pipeline] sh
00:01:34.754 + cat autorun-spdk.conf
00:01:34.755 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:34.755 SPDK_TEST_ISCSI_INITIATOR=1
00:01:34.755 SPDK_TEST_ISCSI=1
00:01:34.755 SPDK_TEST_RBD=1
00:01:34.755 SPDK_RUN_UBSAN=1
00:01:34.755 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:34.755 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:34.755 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:34.762 RUN_NIGHTLY=1
00:01:34.763 [Pipeline] }
00:01:34.779 [Pipeline] // stage
00:01:34.802 [Pipeline] stage
00:01:34.804 [Pipeline] { (Run VM)
00:01:34.818 [Pipeline] sh
00:01:35.099 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:35.099 + echo 'Start stage prepare_nvme.sh'
00:01:35.099 Start stage prepare_nvme.sh
00:01:35.099 + [[ -n 5 ]]
00:01:35.099 + disk_prefix=ex5
00:01:35.099 + [[ -n /var/jenkins/workspace/iscsi-vg-autotest ]]
00:01:35.099 + [[ -e /var/jenkins/workspace/iscsi-vg-autotest/autorun-spdk.conf ]]
00:01:35.099 + source /var/jenkins/workspace/iscsi-vg-autotest/autorun-spdk.conf
00:01:35.099 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:35.099 ++ SPDK_TEST_ISCSI_INITIATOR=1
00:01:35.099 ++ SPDK_TEST_ISCSI=1
00:01:35.099 ++ SPDK_TEST_RBD=1
00:01:35.099 ++ SPDK_RUN_UBSAN=1
00:01:35.099 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:35.099 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:35.099 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:35.099 ++ RUN_NIGHTLY=1
00:01:35.099 + cd /var/jenkins/workspace/iscsi-vg-autotest
00:01:35.099 + nvme_files=()
00:01:35.099 + declare -A nvme_files
00:01:35.099 + backend_dir=/var/lib/libvirt/images/backends
00:01:35.099 + nvme_files['nvme.img']=5G
00:01:35.099 + nvme_files['nvme-cmb.img']=5G
00:01:35.099 + nvme_files['nvme-multi0.img']=4G
00:01:35.099 + nvme_files['nvme-multi1.img']=4G
00:01:35.099 + nvme_files['nvme-multi2.img']=4G
00:01:35.099 + nvme_files['nvme-openstack.img']=8G
00:01:35.099 + nvme_files['nvme-zns.img']=5G
00:01:35.099 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:35.099 + (( SPDK_TEST_FTL == 1 ))
00:01:35.099 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:35.099 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:35.099 + for nvme in "${!nvme_files[@]}"
00:01:35.099 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G
00:01:35.099 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:35.099 + for nvme in "${!nvme_files[@]}"
00:01:35.099 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G
00:01:35.099 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:35.099 + for nvme in "${!nvme_files[@]}"
00:01:35.099 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G
00:01:35.099 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:35.099 + for nvme in "${!nvme_files[@]}"
00:01:35.099 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G
00:01:35.099 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:35.099 + for nvme in "${!nvme_files[@]}"
00:01:35.099 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G
00:01:35.099 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:35.099 + for nvme in "${!nvme_files[@]}"
00:01:35.099 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G
00:01:35.099 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:35.099 + for nvme in "${!nvme_files[@]}"
00:01:35.099 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G
00:01:36.035 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:36.035 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu
00:01:36.035 + echo 'End stage prepare_nvme.sh'
00:01:36.035 End stage prepare_nvme.sh
00:01:36.046 [Pipeline] sh
00:01:36.330 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:36.330 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora38
00:01:36.330
00:01:36.330 DIR=/var/jenkins/workspace/iscsi-vg-autotest/spdk/scripts/vagrant
00:01:36.330 SPDK_DIR=/var/jenkins/workspace/iscsi-vg-autotest/spdk
00:01:36.330 VAGRANT_TARGET=/var/jenkins/workspace/iscsi-vg-autotest
00:01:36.330 HELP=0
00:01:36.330 DRY_RUN=0
00:01:36.330 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,
00:01:36.330 NVME_DISKS_TYPE=nvme,nvme,
00:01:36.330 NVME_AUTO_CREATE=0
00:01:36.330 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,
00:01:36.330 NVME_CMB=,,
00:01:36.330 NVME_PMR=,,
00:01:36.330 NVME_ZNS=,,
00:01:36.330 NVME_MS=,,
00:01:36.330 NVME_FDP=,,
00:01:36.330 SPDK_VAGRANT_DISTRO=fedora38
00:01:36.330 SPDK_VAGRANT_VMCPU=10
00:01:36.330 SPDK_VAGRANT_VMRAM=12288
00:01:36.330 SPDK_VAGRANT_PROVIDER=libvirt
00:01:36.330 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:36.330 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:36.330 SPDK_OPENSTACK_NETWORK=0
00:01:36.330 VAGRANT_PACKAGE_BOX=0
00:01:36.330 VAGRANTFILE=/var/jenkins/workspace/iscsi-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:36.330 FORCE_DISTRO=true
00:01:36.330 VAGRANT_BOX_VERSION=
00:01:36.330 EXTRA_VAGRANTFILES=
00:01:36.330 NIC_MODEL=e1000
00:01:36.330
00:01:36.330 mkdir: created directory '/var/jenkins/workspace/iscsi-vg-autotest/fedora38-libvirt'
00:01:36.330 /var/jenkins/workspace/iscsi-vg-autotest/fedora38-libvirt /var/jenkins/workspace/iscsi-vg-autotest
00:01:38.863 Bringing machine 'default' up with 'libvirt' provider...
00:01:39.492 ==> default: Creating image (snapshot of base box volume).
00:01:39.750 ==> default: Creating domain with the following settings...
00:01:39.750 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721710359_dfadec26cf5f4f6c0113
00:01:39.750 ==> default: -- Domain type: kvm
00:01:39.750 ==> default: -- Cpus: 10
00:01:39.750 ==> default: -- Feature: acpi
00:01:39.750 ==> default: -- Feature: apic
00:01:39.750 ==> default: -- Feature: pae
00:01:39.750 ==> default: -- Memory: 12288M
00:01:39.750 ==> default: -- Memory Backing: hugepages:
00:01:39.750 ==> default: -- Management MAC:
00:01:39.750 ==> default: -- Loader:
00:01:39.750 ==> default: -- Nvram:
00:01:39.750 ==> default: -- Base box: spdk/fedora38
00:01:39.750 ==> default: -- Storage pool: default
00:01:39.750 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721710359_dfadec26cf5f4f6c0113.img (20G)
00:01:39.750 ==> default: -- Volume Cache: default
00:01:39.750 ==> default: -- Kernel:
00:01:39.750 ==> default: -- Initrd:
00:01:39.750 ==> default: -- Graphics Type: vnc
00:01:39.750 ==> default: -- Graphics Port: -1
00:01:39.750 ==> default: -- Graphics IP: 127.0.0.1
00:01:39.750 ==> default: -- Graphics Password: Not defined
00:01:39.750 ==> default: -- Video Type: cirrus
00:01:39.750 ==> default: -- Video VRAM: 9216
00:01:39.750 ==> default: -- Sound Type:
00:01:39.750 ==> default: -- Keymap: en-us
00:01:39.750 ==> default: -- TPM Path:
00:01:39.750 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:39.750 ==> default: -- Command line args:
00:01:39.750 ==> default: -> value=-device,
00:01:39.750 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:39.750 ==> default: -> value=-drive,
00:01:39.750 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0,
00:01:39.750 ==> default: -> value=-device,
00:01:39.750 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:39.750 ==> default: -> value=-device,
00:01:39.750 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:39.750 ==> default: -> value=-drive,
00:01:39.750 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:01:39.750 ==> default: -> value=-device,
00:01:39.750 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:39.750 ==> default: -> value=-drive,
00:01:39.750 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:01:39.750 ==> default: -> value=-device,
00:01:39.750 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:39.750 ==> default: -> value=-drive,
00:01:39.750 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:01:39.750 ==> default: -> value=-device,
00:01:39.750 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:40.010 ==> default: Creating shared folders metadata...
00:01:40.010 ==> default: Starting domain.
00:01:41.914 ==> default: Waiting for domain to get an IP address...
00:01:56.803 ==> default: Waiting for SSH to become available...
00:01:58.179 ==> default: Configuring and enabling network interfaces...
00:02:02.370 default: SSH address: 192.168.121.159:22
00:02:02.370 default: SSH username: vagrant
00:02:02.370 default: SSH auth method: private key
00:02:04.905 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/iscsi-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:11.469 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/iscsi-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:02:18.035 ==> default: Mounting SSHFS shared folder...
00:02:18.603 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/iscsi-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output
00:02:18.603 ==> default: Checking Mount..
00:02:19.981 ==> default: Folder Successfully Mounted!
00:02:19.981 ==> default: Running provisioner: file...
00:02:20.921 default: ~/.gitconfig => .gitconfig
00:02:21.181
00:02:21.181 SUCCESS!
00:02:21.181
00:02:21.181 cd to /var/jenkins/workspace/iscsi-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use.
00:02:21.181 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:21.181 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/iscsi-vg-autotest/fedora38-libvirt" to destroy all trace of vm.
00:02:21.181
00:02:21.190 [Pipeline] }
00:02:21.208 [Pipeline] // stage
00:02:21.217 [Pipeline] dir
00:02:21.218 Running in /var/jenkins/workspace/iscsi-vg-autotest/fedora38-libvirt
00:02:21.220 [Pipeline] {
00:02:21.234 [Pipeline] catchError
00:02:21.236 [Pipeline] {
00:02:21.249 [Pipeline] sh
00:02:21.529 + vagrant ssh-config --host vagrant
00:02:21.529 + sed -ne /^Host/,$p
00:02:21.529 + tee ssh_conf
00:02:24.817 Host vagrant
00:02:24.817 HostName 192.168.121.159
00:02:24.817 User vagrant
00:02:24.817 Port 22
00:02:24.817 UserKnownHostsFile /dev/null
00:02:24.817 StrictHostKeyChecking no
00:02:24.817 PasswordAuthentication no
00:02:24.817 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38
00:02:24.817 IdentitiesOnly yes
00:02:24.817 LogLevel FATAL
00:02:24.817 ForwardAgent yes
00:02:24.817 ForwardX11 yes
00:02:24.817
00:02:24.831 [Pipeline] withEnv
00:02:24.834 [Pipeline] {
00:02:24.850 [Pipeline] sh
00:02:25.165 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:25.165 source /etc/os-release
00:02:25.165 [[ -e /image.version ]] && img=$(< /image.version)
00:02:25.165 # Minimal, systemd-like check.
00:02:25.165 if [[ -e /.dockerenv ]]; then
00:02:25.165 # Clear garbage from the node's name:
00:02:25.165 # agt-er_autotest_547-896 -> autotest_547-896
00:02:25.165 # $HOSTNAME is the actual container id
00:02:25.165 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:25.165 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:25.165 # We can assume this is a mount from a host where container is running,
00:02:25.165 # so fetch its hostname to easily identify the target swarm worker.
00:02:25.165 container="$(< /etc/hostname) ($agent)"
00:02:25.165 else
00:02:25.165 # Fallback
00:02:25.165 container=$agent
00:02:25.165 fi
00:02:25.165 fi
00:02:25.165 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:25.165
00:02:25.176 [Pipeline] }
00:02:25.200 [Pipeline] // withEnv
00:02:25.212 [Pipeline] setCustomBuildProperty
00:02:25.227 [Pipeline] stage
00:02:25.230 [Pipeline] { (Tests)
00:02:25.249 [Pipeline] sh
00:02:25.530 + scp -F ssh_conf -r /var/jenkins/workspace/iscsi-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:25.801 [Pipeline] sh
00:02:26.079 + scp -F ssh_conf -r /var/jenkins/workspace/iscsi-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:26.353 [Pipeline] timeout
00:02:26.354 Timeout set to expire in 45 min
00:02:26.356 [Pipeline] {
00:02:26.370 [Pipeline] sh
00:02:26.646 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:27.213 HEAD is now at f7b31b2b9 log: declare g_deprecation_epoch static
00:02:27.226 [Pipeline] sh
00:02:27.504 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:27.776 [Pipeline] sh
00:02:28.055 + scp -F ssh_conf -r /var/jenkins/workspace/iscsi-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:28.070 [Pipeline] sh
00:02:28.376 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=iscsi-vg-autotest ./autoruner.sh spdk_repo
00:02:28.376 ++ readlink -f spdk_repo
00:02:28.376 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:28.376 + [[ -n /home/vagrant/spdk_repo ]]
00:02:28.376 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:28.376 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:28.376 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:28.376 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:28.376 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:28.376 + [[ iscsi-vg-autotest == pkgdep-* ]]
00:02:28.376 + cd /home/vagrant/spdk_repo
00:02:28.376 + source /etc/os-release
00:02:28.376 ++ NAME='Fedora Linux'
00:02:28.376 ++ VERSION='38 (Cloud Edition)'
00:02:28.376 ++ ID=fedora
00:02:28.376 ++ VERSION_ID=38
00:02:28.376 ++ VERSION_CODENAME=
00:02:28.376 ++ PLATFORM_ID=platform:f38
00:02:28.376 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:02:28.376 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:28.376 ++ LOGO=fedora-logo-icon
00:02:28.376 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:02:28.376 ++ HOME_URL=https://fedoraproject.org/
00:02:28.376 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:02:28.376 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:28.376 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:28.376 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:28.376 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:02:28.376 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:28.376 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:02:28.376 ++ SUPPORT_END=2024-05-14
00:02:28.376 ++ VARIANT='Cloud Edition'
00:02:28.376 ++ VARIANT_ID=cloud
00:02:28.376 + uname -a
00:02:28.376 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:02:28.376 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:28.943 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:28.943 Hugepages
00:02:28.943 node hugesize free / total
00:02:28.943 node0 1048576kB 0 / 0
00:02:28.943 node0 2048kB 0 / 0
00:02:28.943
00:02:28.943 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:28.943 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:28.943 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:02:28.943 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3
00:02:28.943 + rm -f /tmp/spdk-ld-path
00:02:28.943 + source autorun-spdk.conf
00:02:28.943 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:28.943 ++ SPDK_TEST_ISCSI_INITIATOR=1
00:02:28.943 ++ SPDK_TEST_ISCSI=1
00:02:28.943 ++ SPDK_TEST_RBD=1
00:02:28.943 ++ SPDK_RUN_UBSAN=1
00:02:28.943 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:28.943 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:28.943 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:28.943 ++ RUN_NIGHTLY=1
00:02:28.943 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:28.943 + [[ -n '' ]]
00:02:28.943 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:29.202 + for M in /var/spdk/build-*-manifest.txt
00:02:29.202 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:29.202 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:29.202 + for M in /var/spdk/build-*-manifest.txt
00:02:29.202 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:29.202 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:29.202 ++ uname
00:02:29.202 + [[ Linux == \L\i\n\u\x ]]
00:02:29.202 + sudo dmesg -T
00:02:29.202 + sudo dmesg --clear
00:02:29.202 + dmesg_pid=5852
00:02:29.202 + sudo dmesg -Tw
00:02:29.202 + [[ Fedora Linux == FreeBSD ]]
00:02:29.202 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:29.202 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:29.202 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:29.202 + [[ -x /usr/src/fio-static/fio ]]
00:02:29.202 + export FIO_BIN=/usr/src/fio-static/fio
00:02:29.202 + FIO_BIN=/usr/src/fio-static/fio
00:02:29.202 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:29.202 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:29.202 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:29.202 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:29.202 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:29.202 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:29.202 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:29.202 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:29.202 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:29.202 Test configuration:
00:02:29.202 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:29.202 SPDK_TEST_ISCSI_INITIATOR=1
00:02:29.202 SPDK_TEST_ISCSI=1
00:02:29.202 SPDK_TEST_RBD=1
00:02:29.202 SPDK_RUN_UBSAN=1
00:02:29.202 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:29.202 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:29.202 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:29.202 RUN_NIGHTLY=1
04:53:29 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
04:53:29 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
04:53:29 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
04:53:29 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
04:53:29 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
04:53:29 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
04:53:29 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
04:53:29 -- paths/export.sh@5 -- $ export PATH
04:53:29 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
04:53:29 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output
04:53:29 -- common/autobuild_common.sh@447 -- $ date +%s
04:53:29 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721710409.XXXXXX
04:53:29 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721710409.DVxW93
04:53:29 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
04:53:29 -- common/autobuild_common.sh@453 -- $ '[' -n v22.11.4 ']'
04:53:29 -- common/autobuild_common.sh@454 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
04:53:29 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
04:53:29 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
04:53:29 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
04:53:29 -- common/autobuild_common.sh@463 -- $ get_config_params
04:53:29 -- common/autotest_common.sh@396 -- $ xtrace_disable
04:53:29 -- common/autotest_common.sh@10 -- $ set +x
04:53:29 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-rbd --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
04:53:29 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
04:53:29 -- pm/common@17 -- $ local monitor
04:53:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
04:53:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
04:53:29 -- pm/common@25 -- $ sleep 1
04:53:29 -- pm/common@21 -- $ date +%s
04:53:29 -- pm/common@21 -- $ date +%s
04:53:29 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721710409
04:53:29 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721710409
00:02:29.460 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721710409_collect-vmstat.pm.log
00:02:29.460 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721710409_collect-cpu-load.pm.log
00:02:30.395 04:53:30 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:02:30.395 04:53:30 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:30.395 04:53:30 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:30.395 04:53:30 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:30.395 04:53:30 -- spdk/autobuild.sh@16 -- $ date -u
00:02:30.395 Tue Jul 23 04:53:30 AM UTC 2024
00:02:30.395 04:53:30 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:30.395 v24.09-pre-297-gf7b31b2b9
00:02:30.395 04:53:30 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:02:30.396 04:53:30 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:30.396 04:53:30 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:30.396 04:53:30 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:02:30.396 04:53:30 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:02:30.396 04:53:30 -- common/autotest_common.sh@10 -- $ set +x
00:02:30.396 ************************************
00:02:30.396 START TEST ubsan
00:02:30.396 ************************************
00:02:30.396 using ubsan
00:02:30.396 04:53:30 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:02:30.396
00:02:30.396 real 0m0.000s
00:02:30.396 user 0m0.000s
00:02:30.396 sys 0m0.000s
00:02:30.396 04:53:30 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:02:30.396 ************************************
00:02:30.396 END TEST ubsan
00:02:30.396 04:53:30 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:30.396 ************************************
00:02:30.396 04:53:30 -- common/autotest_common.sh@1142 -- $ return 0
00:02:30.396 04:53:30 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']'
00:02:30.396 04:53:30 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:02:30.396 04:53:30 -- common/autobuild_common.sh@439 -- $ run_test build_native_dpdk _build_native_dpdk
00:02:30.396 04:53:30 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']'
00:02:30.396 04:53:30 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:02:30.396 04:53:30 -- common/autotest_common.sh@10 -- $ set +x
00:02:30.396 ************************************
00:02:30.396 START TEST build_native_dpdk
00:02:30.396 ************************************
00:02:30.396 04:53:30 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk
00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build
00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk
00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]]
00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk
00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5
00:02:30.396 caf0f5d395 version: 22.11.4
00:02:30.396 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:02:30.396 dc9c799c7d vhost: fix missing spinlock unlock
00:02:30.396 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:02:30.396 6ef77f2a5e net/gve: fix RX buffer size alignment
00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4
00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
00:02:30.396 04:53:30 build_native_dpdk --
common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l 
> ver2_l ? ver1_l : ver2_l) )) 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:30.396 patching file config/rte_config.h 00:02:30.396 Hunk #1 succeeded at 60 (offset 1 line). 
00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:02:30.396 04:53:30 build_native_dpdk -- scripts/common.sh@365 -- $ return 0 00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:30.396 patching file lib/pcapng/rte_pcapng.c 00:02:30.396 Hunk #1 succeeded at 110 (offset -18 lines). 
00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s 00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:30.396 04:53:30 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:35.667 The Meson build system 00:02:35.667 Version: 1.3.1 00:02:35.667 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:35.667 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:35.667 Build type: native build 00:02:35.667 Program cat found: YES (/usr/bin/cat) 00:02:35.667 Project name: DPDK 00:02:35.667 Project version: 22.11.4 00:02:35.667 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:35.667 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:35.667 Host machine cpu family: x86_64 00:02:35.667 Host machine cpu: x86_64 00:02:35.667 Message: ## Building in Developer Mode ## 00:02:35.667 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:35.667 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:35.667 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:35.667 Program objdump found: YES (/usr/bin/objdump) 00:02:35.667 Program python3 found: YES (/usr/bin/python3) 00:02:35.667 Program cat found: YES (/usr/bin/cat) 00:02:35.667 config/meson.build:83: WARNING: The "machine" option is 
deprecated. Please use "cpu_instruction_set" instead. 00:02:35.667 Checking for size of "void *" : 8 00:02:35.667 Checking for size of "void *" : 8 (cached) 00:02:35.667 Library m found: YES 00:02:35.667 Library numa found: YES 00:02:35.667 Has header "numaif.h" : YES 00:02:35.667 Library fdt found: NO 00:02:35.667 Library execinfo found: NO 00:02:35.667 Has header "execinfo.h" : YES 00:02:35.667 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:35.667 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:35.667 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:35.667 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:35.667 Run-time dependency openssl found: YES 3.0.9 00:02:35.667 Run-time dependency libpcap found: YES 1.10.4 00:02:35.667 Has header "pcap.h" with dependency libpcap: YES 00:02:35.667 Compiler for C supports arguments -Wcast-qual: YES 00:02:35.667 Compiler for C supports arguments -Wdeprecated: YES 00:02:35.667 Compiler for C supports arguments -Wformat: YES 00:02:35.667 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:35.667 Compiler for C supports arguments -Wformat-security: NO 00:02:35.667 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:35.667 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:35.667 Compiler for C supports arguments -Wnested-externs: YES 00:02:35.667 Compiler for C supports arguments -Wold-style-definition: YES 00:02:35.667 Compiler for C supports arguments -Wpointer-arith: YES 00:02:35.667 Compiler for C supports arguments -Wsign-compare: YES 00:02:35.667 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:35.667 Compiler for C supports arguments -Wundef: YES 00:02:35.667 Compiler for C supports arguments -Wwrite-strings: YES 00:02:35.667 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:35.667 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:35.667 Compiler for C supports arguments 
-Wno-missing-field-initializers: YES 00:02:35.667 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:35.667 Compiler for C supports arguments -mavx512f: YES 00:02:35.667 Checking if "AVX512 checking" compiles: YES 00:02:35.667 Fetching value of define "__SSE4_2__" : 1 00:02:35.667 Fetching value of define "__AES__" : 1 00:02:35.667 Fetching value of define "__AVX__" : 1 00:02:35.667 Fetching value of define "__AVX2__" : 1 00:02:35.667 Fetching value of define "__AVX512BW__" : (undefined) 00:02:35.667 Fetching value of define "__AVX512CD__" : (undefined) 00:02:35.667 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:35.667 Fetching value of define "__AVX512F__" : (undefined) 00:02:35.667 Fetching value of define "__AVX512VL__" : (undefined) 00:02:35.667 Fetching value of define "__PCLMUL__" : 1 00:02:35.667 Fetching value of define "__RDRND__" : 1 00:02:35.667 Fetching value of define "__RDSEED__" : 1 00:02:35.667 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:35.667 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:35.667 Message: lib/kvargs: Defining dependency "kvargs" 00:02:35.667 Message: lib/telemetry: Defining dependency "telemetry" 00:02:35.667 Checking for function "getentropy" : YES 00:02:35.667 Message: lib/eal: Defining dependency "eal" 00:02:35.667 Message: lib/ring: Defining dependency "ring" 00:02:35.667 Message: lib/rcu: Defining dependency "rcu" 00:02:35.667 Message: lib/mempool: Defining dependency "mempool" 00:02:35.667 Message: lib/mbuf: Defining dependency "mbuf" 00:02:35.667 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:35.667 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:35.667 Compiler for C supports arguments -mpclmul: YES 00:02:35.667 Compiler for C supports arguments -maes: YES 00:02:35.667 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:35.667 Compiler for C supports arguments -mavx512bw: YES 00:02:35.667 Compiler for C supports 
arguments -mavx512dq: YES 00:02:35.667 Compiler for C supports arguments -mavx512vl: YES 00:02:35.667 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:35.667 Compiler for C supports arguments -mavx2: YES 00:02:35.667 Compiler for C supports arguments -mavx: YES 00:02:35.667 Message: lib/net: Defining dependency "net" 00:02:35.667 Message: lib/meter: Defining dependency "meter" 00:02:35.667 Message: lib/ethdev: Defining dependency "ethdev" 00:02:35.667 Message: lib/pci: Defining dependency "pci" 00:02:35.667 Message: lib/cmdline: Defining dependency "cmdline" 00:02:35.667 Message: lib/metrics: Defining dependency "metrics" 00:02:35.667 Message: lib/hash: Defining dependency "hash" 00:02:35.667 Message: lib/timer: Defining dependency "timer" 00:02:35.667 Fetching value of define "__AVX2__" : 1 (cached) 00:02:35.667 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:35.667 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:35.667 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:35.667 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:35.667 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:35.667 Message: lib/acl: Defining dependency "acl" 00:02:35.667 Message: lib/bbdev: Defining dependency "bbdev" 00:02:35.667 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:35.667 Run-time dependency libelf found: YES 0.190 00:02:35.667 Message: lib/bpf: Defining dependency "bpf" 00:02:35.667 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:35.667 Message: lib/compressdev: Defining dependency "compressdev" 00:02:35.667 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:35.667 Message: lib/distributor: Defining dependency "distributor" 00:02:35.667 Message: lib/efd: Defining dependency "efd" 00:02:35.667 Message: lib/eventdev: Defining dependency "eventdev" 00:02:35.667 Message: lib/gpudev: Defining dependency "gpudev" 
00:02:35.667 Message: lib/gro: Defining dependency "gro" 00:02:35.667 Message: lib/gso: Defining dependency "gso" 00:02:35.667 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:35.667 Message: lib/jobstats: Defining dependency "jobstats" 00:02:35.667 Message: lib/latencystats: Defining dependency "latencystats" 00:02:35.667 Message: lib/lpm: Defining dependency "lpm" 00:02:35.667 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:35.667 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:35.667 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:35.667 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:35.667 Message: lib/member: Defining dependency "member" 00:02:35.667 Message: lib/pcapng: Defining dependency "pcapng" 00:02:35.668 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:35.668 Message: lib/power: Defining dependency "power" 00:02:35.668 Message: lib/rawdev: Defining dependency "rawdev" 00:02:35.668 Message: lib/regexdev: Defining dependency "regexdev" 00:02:35.668 Message: lib/dmadev: Defining dependency "dmadev" 00:02:35.668 Message: lib/rib: Defining dependency "rib" 00:02:35.668 Message: lib/reorder: Defining dependency "reorder" 00:02:35.668 Message: lib/sched: Defining dependency "sched" 00:02:35.668 Message: lib/security: Defining dependency "security" 00:02:35.668 Message: lib/stack: Defining dependency "stack" 00:02:35.668 Has header "linux/userfaultfd.h" : YES 00:02:35.668 Message: lib/vhost: Defining dependency "vhost" 00:02:35.668 Message: lib/ipsec: Defining dependency "ipsec" 00:02:35.668 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:35.668 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:35.668 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:35.668 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:35.668 Message: lib/fib: Defining dependency "fib" 00:02:35.668 Message: 
lib/port: Defining dependency "port" 00:02:35.668 Message: lib/pdump: Defining dependency "pdump" 00:02:35.668 Message: lib/table: Defining dependency "table" 00:02:35.668 Message: lib/pipeline: Defining dependency "pipeline" 00:02:35.668 Message: lib/graph: Defining dependency "graph" 00:02:35.668 Message: lib/node: Defining dependency "node" 00:02:35.668 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:35.668 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:35.668 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:35.668 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:35.668 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:35.668 Compiler for C supports arguments -Wno-unused-value: YES 00:02:35.668 Compiler for C supports arguments -Wno-format: YES 00:02:35.668 Compiler for C supports arguments -Wno-format-security: YES 00:02:35.668 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:36.603 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:36.604 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:36.604 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:36.604 Fetching value of define "__AVX2__" : 1 (cached) 00:02:36.604 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:36.604 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:36.604 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:36.604 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:36.604 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:36.604 Program doxygen found: YES (/usr/bin/doxygen) 00:02:36.604 Configuring doxy-api.conf using configuration 00:02:36.604 Program sphinx-build found: NO 00:02:36.604 Configuring rte_build_config.h using configuration 00:02:36.604 Message: 00:02:36.604 ================= 00:02:36.604 Applications Enabled 00:02:36.604 ================= 
00:02:36.604 00:02:36.604 apps: 00:02:36.604 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:36.604 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:36.604 test-security-perf, 00:02:36.604 00:02:36.604 Message: 00:02:36.604 ================= 00:02:36.604 Libraries Enabled 00:02:36.604 ================= 00:02:36.604 00:02:36.604 libs: 00:02:36.604 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:36.604 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:36.604 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:36.604 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:02:36.604 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:36.604 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:36.604 table, pipeline, graph, node, 00:02:36.604 00:02:36.604 Message: 00:02:36.604 =============== 00:02:36.604 Drivers Enabled 00:02:36.604 =============== 00:02:36.604 00:02:36.604 common: 00:02:36.604 00:02:36.604 bus: 00:02:36.604 pci, vdev, 00:02:36.604 mempool: 00:02:36.604 ring, 00:02:36.604 dma: 00:02:36.604 00:02:36.604 net: 00:02:36.604 i40e, 00:02:36.604 raw: 00:02:36.604 00:02:36.604 crypto: 00:02:36.604 00:02:36.604 compress: 00:02:36.604 00:02:36.604 regex: 00:02:36.604 00:02:36.604 vdpa: 00:02:36.604 00:02:36.604 event: 00:02:36.604 00:02:36.604 baseband: 00:02:36.604 00:02:36.604 gpu: 00:02:36.604 00:02:36.604 00:02:36.604 Message: 00:02:36.604 ================= 00:02:36.604 Content Skipped 00:02:36.604 ================= 00:02:36.604 00:02:36.604 apps: 00:02:36.604 00:02:36.604 libs: 00:02:36.604 kni: explicitly disabled via build config (deprecated lib) 00:02:36.604 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:36.604 00:02:36.604 drivers: 00:02:36.604 common/cpt: not in enabled drivers build config 00:02:36.604 common/dpaax: 
not in enabled drivers build config 00:02:36.604 common/iavf: not in enabled drivers build config 00:02:36.604 common/idpf: not in enabled drivers build config 00:02:36.604 common/mvep: not in enabled drivers build config 00:02:36.604 common/octeontx: not in enabled drivers build config 00:02:36.604 bus/auxiliary: not in enabled drivers build config 00:02:36.604 bus/dpaa: not in enabled drivers build config 00:02:36.604 bus/fslmc: not in enabled drivers build config 00:02:36.604 bus/ifpga: not in enabled drivers build config 00:02:36.604 bus/vmbus: not in enabled drivers build config 00:02:36.604 common/cnxk: not in enabled drivers build config 00:02:36.604 common/mlx5: not in enabled drivers build config 00:02:36.604 common/qat: not in enabled drivers build config 00:02:36.604 common/sfc_efx: not in enabled drivers build config 00:02:36.604 mempool/bucket: not in enabled drivers build config 00:02:36.604 mempool/cnxk: not in enabled drivers build config 00:02:36.604 mempool/dpaa: not in enabled drivers build config 00:02:36.604 mempool/dpaa2: not in enabled drivers build config 00:02:36.604 mempool/octeontx: not in enabled drivers build config 00:02:36.604 mempool/stack: not in enabled drivers build config 00:02:36.604 dma/cnxk: not in enabled drivers build config 00:02:36.604 dma/dpaa: not in enabled drivers build config 00:02:36.604 dma/dpaa2: not in enabled drivers build config 00:02:36.604 dma/hisilicon: not in enabled drivers build config 00:02:36.604 dma/idxd: not in enabled drivers build config 00:02:36.604 dma/ioat: not in enabled drivers build config 00:02:36.604 dma/skeleton: not in enabled drivers build config 00:02:36.604 net/af_packet: not in enabled drivers build config 00:02:36.604 net/af_xdp: not in enabled drivers build config 00:02:36.604 net/ark: not in enabled drivers build config 00:02:36.604 net/atlantic: not in enabled drivers build config 00:02:36.604 net/avp: not in enabled drivers build config 00:02:36.604 net/axgbe: not in enabled 
drivers build config 00:02:36.604 net/bnx2x: not in enabled drivers build config 00:02:36.604 net/bnxt: not in enabled drivers build config 00:02:36.604 net/bonding: not in enabled drivers build config 00:02:36.604 net/cnxk: not in enabled drivers build config 00:02:36.604 net/cxgbe: not in enabled drivers build config 00:02:36.604 net/dpaa: not in enabled drivers build config 00:02:36.604 net/dpaa2: not in enabled drivers build config 00:02:36.604 net/e1000: not in enabled drivers build config 00:02:36.604 net/ena: not in enabled drivers build config 00:02:36.604 net/enetc: not in enabled drivers build config 00:02:36.604 net/enetfec: not in enabled drivers build config 00:02:36.604 net/enic: not in enabled drivers build config 00:02:36.604 net/failsafe: not in enabled drivers build config 00:02:36.604 net/fm10k: not in enabled drivers build config 00:02:36.604 net/gve: not in enabled drivers build config 00:02:36.604 net/hinic: not in enabled drivers build config 00:02:36.604 net/hns3: not in enabled drivers build config 00:02:36.604 net/iavf: not in enabled drivers build config 00:02:36.604 net/ice: not in enabled drivers build config 00:02:36.604 net/idpf: not in enabled drivers build config 00:02:36.604 net/igc: not in enabled drivers build config 00:02:36.604 net/ionic: not in enabled drivers build config 00:02:36.604 net/ipn3ke: not in enabled drivers build config 00:02:36.604 net/ixgbe: not in enabled drivers build config 00:02:36.604 net/kni: not in enabled drivers build config 00:02:36.604 net/liquidio: not in enabled drivers build config 00:02:36.604 net/mana: not in enabled drivers build config 00:02:36.604 net/memif: not in enabled drivers build config 00:02:36.604 net/mlx4: not in enabled drivers build config 00:02:36.604 net/mlx5: not in enabled drivers build config 00:02:36.604 net/mvneta: not in enabled drivers build config 00:02:36.604 net/mvpp2: not in enabled drivers build config 00:02:36.604 net/netvsc: not in enabled drivers build config 
00:02:36.604 net/nfb: not in enabled drivers build config 00:02:36.604 net/nfp: not in enabled drivers build config 00:02:36.604 net/ngbe: not in enabled drivers build config 00:02:36.604 net/null: not in enabled drivers build config 00:02:36.604 net/octeontx: not in enabled drivers build config 00:02:36.604 net/octeon_ep: not in enabled drivers build config 00:02:36.604 net/pcap: not in enabled drivers build config 00:02:36.604 net/pfe: not in enabled drivers build config 00:02:36.604 net/qede: not in enabled drivers build config 00:02:36.604 net/ring: not in enabled drivers build config 00:02:36.604 net/sfc: not in enabled drivers build config 00:02:36.604 net/softnic: not in enabled drivers build config 00:02:36.604 net/tap: not in enabled drivers build config 00:02:36.604 net/thunderx: not in enabled drivers build config 00:02:36.604 net/txgbe: not in enabled drivers build config 00:02:36.604 net/vdev_netvsc: not in enabled drivers build config 00:02:36.604 net/vhost: not in enabled drivers build config 00:02:36.604 net/virtio: not in enabled drivers build config 00:02:36.604 net/vmxnet3: not in enabled drivers build config 00:02:36.604 raw/cnxk_bphy: not in enabled drivers build config 00:02:36.604 raw/cnxk_gpio: not in enabled drivers build config 00:02:36.604 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:36.604 raw/ifpga: not in enabled drivers build config 00:02:36.604 raw/ntb: not in enabled drivers build config 00:02:36.604 raw/skeleton: not in enabled drivers build config 00:02:36.604 crypto/armv8: not in enabled drivers build config 00:02:36.604 crypto/bcmfs: not in enabled drivers build config 00:02:36.604 crypto/caam_jr: not in enabled drivers build config 00:02:36.604 crypto/ccp: not in enabled drivers build config 00:02:36.604 crypto/cnxk: not in enabled drivers build config 00:02:36.604 crypto/dpaa_sec: not in enabled drivers build config 00:02:36.604 crypto/dpaa2_sec: not in enabled drivers build config 00:02:36.604 crypto/ipsec_mb: 
not in enabled drivers build config 00:02:36.605 crypto/mlx5: not in enabled drivers build config 00:02:36.605 crypto/mvsam: not in enabled drivers build config 00:02:36.605 crypto/nitrox: not in enabled drivers build config 00:02:36.605 crypto/null: not in enabled drivers build config 00:02:36.605 crypto/octeontx: not in enabled drivers build config 00:02:36.605 crypto/openssl: not in enabled drivers build config 00:02:36.605 crypto/scheduler: not in enabled drivers build config 00:02:36.605 crypto/uadk: not in enabled drivers build config 00:02:36.605 crypto/virtio: not in enabled drivers build config 00:02:36.605 compress/isal: not in enabled drivers build config 00:02:36.605 compress/mlx5: not in enabled drivers build config 00:02:36.605 compress/octeontx: not in enabled drivers build config 00:02:36.605 compress/zlib: not in enabled drivers build config 00:02:36.605 regex/mlx5: not in enabled drivers build config 00:02:36.605 regex/cn9k: not in enabled drivers build config 00:02:36.605 vdpa/ifc: not in enabled drivers build config 00:02:36.605 vdpa/mlx5: not in enabled drivers build config 00:02:36.605 vdpa/sfc: not in enabled drivers build config 00:02:36.605 event/cnxk: not in enabled drivers build config 00:02:36.605 event/dlb2: not in enabled drivers build config 00:02:36.605 event/dpaa: not in enabled drivers build config 00:02:36.605 event/dpaa2: not in enabled drivers build config 00:02:36.605 event/dsw: not in enabled drivers build config 00:02:36.605 event/opdl: not in enabled drivers build config 00:02:36.605 event/skeleton: not in enabled drivers build config 00:02:36.605 event/sw: not in enabled drivers build config 00:02:36.605 event/octeontx: not in enabled drivers build config 00:02:36.605 baseband/acc: not in enabled drivers build config 00:02:36.605 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:36.605 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:36.605 baseband/la12xx: not in enabled drivers build config 
00:02:36.605 baseband/null: not in enabled drivers build config 00:02:36.605 baseband/turbo_sw: not in enabled drivers build config 00:02:36.605 gpu/cuda: not in enabled drivers build config 00:02:36.605 00:02:36.605 00:02:36.605 Build targets in project: 314 00:02:36.605 00:02:36.605 DPDK 22.11.4 00:02:36.605 00:02:36.605 User defined options 00:02:36.605 libdir : lib 00:02:36.605 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:36.605 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:36.605 c_link_args : 00:02:36.605 enable_docs : false 00:02:36.605 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:36.605 enable_kmods : false 00:02:36.605 machine : native 00:02:36.605 tests : false 00:02:36.605 00:02:36.605 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:36.605 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:36.863 04:53:36 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:36.863 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:36.863 [1/743] Generating lib/rte_kvargs_mingw with a custom command 00:02:36.863 [2/743] Generating lib/rte_telemetry_mingw with a custom command 00:02:36.863 [3/743] Generating lib/rte_telemetry_def with a custom command 00:02:36.863 [4/743] Generating lib/rte_kvargs_def with a custom command 00:02:36.863 [5/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:36.863 [6/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:37.123 [7/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:37.123 [8/743] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:37.123 [9/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:37.123 [10/743] Linking static target lib/librte_kvargs.a 
00:02:37.123 [11/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:37.123 [12/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:37.123 [13/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:37.123 [14/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:37.123 [15/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:37.123 [16/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:37.123 [17/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:37.123 [18/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:37.383 [19/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:37.383 [20/743] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:37.383 [21/743] Linking target lib/librte_kvargs.so.23.0
00:02:37.383 [22/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:37.383 [23/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o
00:02:37.383 [24/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:37.383 [25/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:37.383 [26/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:37.383 [27/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:37.383 [28/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:37.383 [29/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:37.383 [30/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:37.383 [31/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:37.641 [32/743] Linking static target lib/librte_telemetry.a
00:02:37.641 [33/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:37.641 [34/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:37.641 [35/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:37.641 [36/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:37.641 [37/743] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols
00:02:37.641 [38/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:37.641 [39/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:37.641 [40/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:37.641 [41/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:37.899 [42/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:37.899 [43/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:37.899 [44/743] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:37.899 [45/743] Linking target lib/librte_telemetry.so.23.0
00:02:37.899 [46/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:37.899 [47/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:37.899 [48/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:37.899 [49/743] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols
00:02:38.158 [50/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:38.158 [51/743] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:38.158 [52/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:38.158 [53/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:38.158 [54/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:38.158 [55/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:38.158 [56/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:38.158 [57/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:38.158 [58/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:38.158 [59/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:38.158 [60/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:38.158 [61/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:38.158 [62/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:38.158 [63/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:38.158 [64/743] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:38.158 [65/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o
00:02:38.158 [66/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:38.416 [67/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:38.416 [68/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:38.416 [69/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:38.416 [70/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:38.416 [71/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:38.416 [72/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:38.416 [73/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:38.416 [74/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:38.416 [75/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:38.416 [76/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:38.416 [77/743] Generating lib/rte_eal_def with a custom command
00:02:38.416 [78/743] Generating lib/rte_eal_mingw with a custom command
00:02:38.416 [79/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:38.416 [80/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:38.416 [81/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:38.416 [82/743] Generating lib/rte_ring_def with a custom command
00:02:38.416 [83/743] Generating lib/rte_ring_mingw with a custom command
00:02:38.674 [84/743] Generating lib/rte_rcu_def with a custom command
00:02:38.675 [85/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:38.675 [86/743] Generating lib/rte_rcu_mingw with a custom command
00:02:38.675 [87/743] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:38.675 [88/743] Linking static target lib/librte_ring.a
00:02:38.675 [89/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:38.675 [90/743] Generating lib/rte_mempool_def with a custom command
00:02:38.675 [91/743] Generating lib/rte_mempool_mingw with a custom command
00:02:38.675 [92/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:38.675 [93/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:38.933 [94/743] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:38.933 [95/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:39.191 [96/743] Linking static target lib/librte_eal.a
00:02:39.191 [97/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:39.191 [98/743] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:39.191 [99/743] Generating lib/rte_mbuf_def with a custom command
00:02:39.191 [100/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:39.191 [101/743] Generating lib/rte_mbuf_mingw with a custom command
00:02:39.449 [102/743] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:39.449 [103/743] Linking static target lib/librte_rcu.a
00:02:39.449 [104/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:39.449 [105/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:39.707 [106/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:39.707 [107/743] Linking static target lib/librte_mempool.a
00:02:39.707 [108/743] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:39.707 [109/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:39.707 [110/743] Generating lib/rte_net_def with a custom command
00:02:39.707 [111/743] Generating lib/rte_net_mingw with a custom command
00:02:39.707 [112/743] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:39.707 [113/743] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:39.965 [114/743] Generating lib/rte_meter_def with a custom command
00:02:39.965 [115/743] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:39.965 [116/743] Generating lib/rte_meter_mingw with a custom command
00:02:39.965 [117/743] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:39.965 [118/743] Linking static target lib/librte_meter.a
00:02:39.965 [119/743] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:39.965 [120/743] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:40.223 [121/743] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:40.223 [122/743] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:40.223 [123/743] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:40.223 [124/743] Linking static target lib/librte_net.a
00:02:40.223 [125/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:40.223 [126/743] Linking static target lib/librte_mbuf.a
00:02:40.481 [127/743] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:40.481 [128/743] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:40.481 [129/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:40.739 [130/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:40.739 [131/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:40.739 [132/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:40.739 [133/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:40.997 [134/743] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:40.997 [135/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:41.255 [136/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:41.255 [137/743] Generating lib/rte_ethdev_def with a custom command
00:02:41.255 [138/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:41.255 [139/743] Generating lib/rte_ethdev_mingw with a custom command
00:02:41.513 [140/743] Generating lib/rte_pci_def with a custom command
00:02:41.513 [141/743] Generating lib/rte_pci_mingw with a custom command
00:02:41.513 [142/743] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:41.513 [143/743] Linking static target lib/librte_pci.a
00:02:41.513 [144/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:41.513 [145/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:41.513 [146/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:41.513 [147/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:41.513 [148/743] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:41.771 [149/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:41.771 [150/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:41.771 [151/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:41.771 [152/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:41.771 [153/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:41.771 [154/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:41.771 [155/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:41.771 [156/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:41.771 [157/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:41.771 [158/743] Generating lib/rte_cmdline_def with a custom command
00:02:41.771 [159/743] Generating lib/rte_cmdline_mingw with a custom command
00:02:41.771 [160/743] Generating lib/rte_metrics_def with a custom command
00:02:41.771 [161/743] Generating lib/rte_metrics_mingw with a custom command
00:02:42.029 [162/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:42.029 [163/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o
00:02:42.029 [164/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:42.029 [165/743] Generating lib/rte_hash_def with a custom command
00:02:42.029 [166/743] Generating lib/rte_hash_mingw with a custom command
00:02:42.029 [167/743] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:42.029 [168/743] Generating lib/rte_timer_def with a custom command
00:02:42.029 [169/743] Generating lib/rte_timer_mingw with a custom command
00:02:42.029 [170/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:42.287 [171/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:42.287 [172/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:42.287 [173/743] Linking static target lib/librte_cmdline.a
00:02:42.545 [174/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o
00:02:42.545 [175/743] Linking static target lib/librte_metrics.a
00:02:42.545 [176/743] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:42.545 [177/743] Linking static target lib/librte_timer.a
00:02:42.803 [178/743] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output)
00:02:42.803 [179/743] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:43.061 [180/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:43.061 [181/743] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:02:43.061 [182/743] Linking static target lib/librte_ethdev.a
00:02:43.061 [183/743] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:43.061 [184/743] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:43.625 [185/743] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o
00:02:43.625 [186/743] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o
00:02:43.625 [187/743] Generating lib/rte_acl_def with a custom command
00:02:43.625 [188/743] Generating lib/rte_acl_mingw with a custom command
00:02:43.625 [189/743] Generating lib/rte_bbdev_def with a custom command
00:02:43.625 [190/743] Generating lib/rte_bbdev_mingw with a custom command
00:02:43.625 [191/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o
00:02:43.625 [192/743] Generating lib/rte_bitratestats_def with a custom command
00:02:43.895 [193/743] Generating lib/rte_bitratestats_mingw with a custom command
00:02:44.167 [194/743] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:02:44.426 [195/743] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:02:44.426 [196/743] Linking static target lib/librte_bitratestats.a
00:02:44.426 [197/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o
00:02:44.683 [198/743] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o
00:02:44.683 [199/743] Linking static target lib/librte_bbdev.a
00:02:44.683 [200/743] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:44.683 [201/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o
00:02:44.941 [202/743] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:44.941 [203/743] Linking static target lib/librte_hash.a
00:02:44.941 [204/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o
00:02:45.199 [205/743] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o
00:02:45.199 [206/743] Linking static target lib/acl/libavx512_tmp.a
00:02:45.199 [207/743] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:45.199 [208/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o
00:02:45.199 [209/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o
00:02:45.457 [210/743] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:45.457 [211/743] Generating lib/rte_bpf_def with a custom command
00:02:45.457 [212/743] Generating lib/rte_bpf_mingw with a custom command
00:02:45.714 [213/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o
00:02:45.714 [214/743] Generating lib/rte_cfgfile_def with a custom command
00:02:45.714 [215/743] Generating lib/rte_cfgfile_mingw with a custom command
00:02:45.714 [216/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o
00:02:45.714 [217/743] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o
00:02:45.714 [218/743] Linking static target lib/librte_cfgfile.a
00:02:45.714 [219/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o
00:02:45.972 [220/743] Linking static target lib/librte_acl.a
00:02:45.972 [221/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o
00:02:45.972 [222/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o
00:02:45.972 [223/743] Generating lib/rte_compressdev_def with a custom command
00:02:45.972 [224/743] Generating lib/rte_compressdev_mingw with a custom command
00:02:46.229 [225/743] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output)
00:02:46.229 [226/743] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output)
00:02:46.229 [227/743] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:46.230 [228/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o
00:02:46.230 [229/743] Linking target lib/librte_eal.so.23.0
00:02:46.230 [230/743] Generating lib/rte_cryptodev_def with a custom command
00:02:46.230 [231/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:46.230 [232/743] Generating lib/rte_cryptodev_mingw with a custom command
00:02:46.487 [233/743] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols
00:02:46.488 [234/743] Linking target lib/librte_ring.so.23.0
00:02:46.488 [235/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o
00:02:46.488 [236/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:46.488 [237/743] Linking target lib/librte_meter.so.23.0
00:02:46.488 [238/743] Linking target lib/librte_pci.so.23.0
00:02:46.488 [239/743] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols
00:02:46.488 [240/743] Linking target lib/librte_rcu.so.23.0
00:02:46.488 [241/743] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols
00:02:46.745 [242/743] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols
00:02:46.746 [243/743] Linking target lib/librte_mempool.so.23.0
00:02:46.746 [244/743] Linking target lib/librte_timer.so.23.0
00:02:46.746 [245/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:46.746 [246/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:46.746 [247/743] Linking target lib/librte_acl.so.23.0
00:02:46.746 [248/743] Linking static target lib/librte_bpf.a
00:02:46.746 [249/743] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols
00:02:46.746 [250/743] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols
00:02:46.746 [251/743] Linking target lib/librte_cfgfile.so.23.0
00:02:46.746 [252/743] Linking static target lib/librte_compressdev.a
00:02:46.746 [253/743] Linking target lib/librte_mbuf.so.23.0
00:02:46.746 [254/743] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols
00:02:46.746 [255/743] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols
00:02:47.004 [256/743] Generating lib/rte_distributor_def with a custom command
00:02:47.004 [257/743] Generating lib/rte_distributor_mingw with a custom command
00:02:47.004 [258/743] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols
00:02:47.004 [259/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:47.004 [260/743] Linking target lib/librte_bbdev.so.23.0
00:02:47.004 [261/743] Linking target lib/librte_net.so.23.0
00:02:47.004 [262/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o
00:02:47.004 [263/743] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:47.004 [264/743] Generating lib/rte_efd_def with a custom command
00:02:47.004 [265/743] Generating lib/rte_efd_mingw with a custom command
00:02:47.262 [266/743] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols
00:02:47.262 [267/743] Linking target lib/librte_cmdline.so.23.0
00:02:47.262 [268/743] Linking target lib/librte_hash.so.23.0
00:02:47.262 [269/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o
00:02:47.262 [270/743] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols
00:02:47.520 [271/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o
00:02:47.520 [272/743] Linking static target lib/librte_distributor.a
00:02:47.520 [273/743] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:47.779 [274/743] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:47.779 [275/743] Linking target lib/librte_ethdev.so.23.0
00:02:47.779 [276/743] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output)
00:02:47.779 [277/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o
00:02:47.779 [278/743] Linking target lib/librte_compressdev.so.23.0
00:02:47.779 [279/743] Linking target lib/librte_distributor.so.23.0
00:02:47.779 [280/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o
00:02:47.779 [281/743] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols
00:02:47.779 [282/743] Generating lib/rte_eventdev_def with a custom command
00:02:47.779 [283/743] Linking target lib/librte_metrics.so.23.0
00:02:47.779 [284/743] Linking target lib/librte_bpf.so.23.0
00:02:48.037 [285/743] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols
00:02:48.037 [286/743] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols
00:02:48.037 [287/743] Linking target lib/librte_bitratestats.so.23.0
00:02:48.037 [288/743] Generating lib/rte_eventdev_mingw with a custom command
00:02:48.037 [289/743] Generating lib/rte_gpudev_def with a custom command
00:02:48.037 [290/743] Generating lib/rte_gpudev_mingw with a custom command
00:02:48.295 [291/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o
00:02:48.554 [292/743] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o
00:02:48.554 [293/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o
00:02:48.554 [294/743] Linking static target lib/librte_efd.a
00:02:48.554 [295/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:48.554 [296/743] Linking static target lib/librte_cryptodev.a
00:02:48.812 [297/743] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:02:48.812 [298/743] Linking target lib/librte_efd.so.23.0
00:02:48.812 [299/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o
00:02:48.812 [300/743] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o
00:02:48.812 [301/743] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o
00:02:48.812 [302/743] Linking static target lib/librte_gpudev.a
00:02:48.812 [303/743] Generating lib/rte_gro_def with a custom command
00:02:49.070 [304/743] Generating lib/rte_gro_mingw with a custom command
00:02:49.070 [305/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o
00:02:49.070 [306/743] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o
00:02:49.329 [307/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o
00:02:49.329 [308/743] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o
00:02:49.587 [309/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o
00:02:49.587 [310/743] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o
00:02:49.587 [311/743] Generating lib/rte_gso_def with a custom command
00:02:49.587 [312/743] Generating lib/rte_gso_mingw with a custom command
00:02:49.587 [313/743] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:49.587 [314/743] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o
00:02:49.845 [315/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o
00:02:49.845 [316/743] Linking static target lib/librte_gro.a
00:02:49.845 [317/743] Linking target lib/librte_gpudev.so.23.0
00:02:49.845 [318/743] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:02:49.845 [319/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:02:49.845 [320/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o
00:02:50.103 [321/743] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output)
00:02:50.103 [322/743] Linking target lib/librte_gro.so.23.0
00:02:50.103 [323/743] Generating lib/rte_ip_frag_def with a custom command
00:02:50.103 [324/743] Generating lib/rte_ip_frag_mingw with a custom command
00:02:50.103 [325/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o
00:02:50.103 [326/743] Linking static target lib/librte_eventdev.a
00:02:50.103 [327/743] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:02:50.103 [328/743] Linking static target lib/librte_gso.a
00:02:50.362 [329/743] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:02:50.362 [330/743] Linking static target lib/librte_jobstats.a
00:02:50.362 [331/743] Generating lib/rte_jobstats_def with a custom command
00:02:50.362 [332/743] Generating lib/rte_jobstats_mingw with a custom command
00:02:50.362 [333/743] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:02:50.362 [334/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:02:50.362 [335/743] Linking target lib/librte_gso.so.23.0
00:02:50.362 [336/743] Generating lib/rte_latencystats_def with a custom command
00:02:50.362 [337/743] Generating lib/rte_latencystats_mingw with a custom command
00:02:50.620 [338/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:02:50.620 [339/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:02:50.620 [340/743] Generating lib/rte_lpm_def with a custom command
00:02:50.620 [341/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:02:50.620 [342/743] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:50.620 [343/743] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:50.620 [344/743] Generating lib/rte_lpm_mingw with a custom command
00:02:50.620 [345/743] Linking target lib/librte_jobstats.so.23.0
00:02:50.620 [346/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:02:50.620 [347/743] Linking target lib/librte_cryptodev.so.23.0
00:02:50.620 [348/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:02:50.620 [349/743] Linking static target lib/librte_ip_frag.a
00:02:50.881 [350/743] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols
00:02:51.139 [351/743] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:02:51.139 [352/743] Linking target lib/librte_ip_frag.so.23.0
00:02:51.139 [353/743] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:02:51.139 [354/743] Linking static target lib/librte_latencystats.a
00:02:51.139 [355/743] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols
00:02:51.139 [356/743] Generating lib/rte_member_def with a custom command
00:02:51.398 [357/743] Generating lib/rte_member_mingw with a custom command
00:02:51.398 [358/743] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:02:51.398 [359/743] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o
00:02:51.398 [360/743] Linking static target lib/member/libsketch_avx512_tmp.a
00:02:51.398 [361/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:02:51.398 [362/743] Generating lib/rte_pcapng_def with a custom command
00:02:51.398 [363/743] Generating lib/rte_pcapng_mingw with a custom command
00:02:51.399 [364/743] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:51.399 [365/743] Linking target lib/librte_latencystats.so.23.0
00:02:51.399 [366/743] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:51.399 [367/743] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:51.657 [368/743] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:02:51.657 [369/743] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:51.657 [370/743] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:51.915 [371/743] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o
00:02:51.915 [372/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:02:51.915 [373/743] Linking static target lib/librte_lpm.a
00:02:51.915 [374/743] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:02:51.915 [375/743] Generating lib/rte_power_def with a custom command
00:02:51.915 [376/743] Generating lib/rte_power_mingw with a custom command
00:02:52.173 [377/743] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:52.173 [378/743] Generating lib/rte_rawdev_def with a custom command
00:02:52.173 [379/743] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:52.173 [380/743] Generating lib/rte_rawdev_mingw with a custom command
00:02:52.173 [381/743] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:02:52.173 [382/743] Linking target lib/librte_eventdev.so.23.0
00:02:52.173 [383/743] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:52.173 [384/743] Linking target lib/librte_lpm.so.23.0
00:02:52.173 [385/743] Generating lib/rte_regexdev_def with a custom command
00:02:52.173 [386/743] Generating lib/rte_regexdev_mingw with a custom command
00:02:52.432 [387/743] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:02:52.432 [388/743] Linking static target lib/librte_pcapng.a
00:02:52.432 [389/743] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols
00:02:52.432 [390/743] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o
00:02:52.432 [391/743] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols
00:02:52.432 [392/743] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:52.432 [393/743] Generating lib/rte_dmadev_def with a custom command
00:02:52.432 [394/743] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:02:52.432 [395/743] Generating lib/rte_dmadev_mingw with a custom command
00:02:52.432 [396/743] Linking static target lib/librte_rawdev.a
00:02:52.432 [397/743] Generating lib/rte_rib_def with a custom command
00:02:52.432 [398/743] Generating lib/rte_rib_mingw with a custom command
00:02:52.432 [399/743] Generating lib/rte_reorder_def with a custom command
00:02:52.432 [400/743] Generating lib/rte_reorder_mingw with a custom command
00:02:52.690 [401/743] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:02:52.690 [402/743] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:52.690 [403/743] Linking static target lib/librte_power.a
00:02:52.690 [404/743] Linking target lib/librte_pcapng.so.23.0
00:02:52.690 [405/743] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:52.690 [406/743] Linking static target lib/librte_dmadev.a
00:02:52.690 [407/743] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols
00:02:52.690 [408/743] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:52.949 [409/743] Linking target lib/librte_rawdev.so.23.0
00:02:52.949 [410/743] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:02:52.949 [411/743] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:02:52.949 [412/743] Linking static target lib/librte_regexdev.a
00:02:52.949 [413/743] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:02:52.949 [414/743] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:02:52.949 [415/743] Generating lib/rte_sched_def with a custom command
00:02:52.949 [416/743] Generating lib/rte_sched_mingw with a custom command
00:02:52.949 [417/743] Generating lib/rte_security_def with a custom command
00:02:52.949 [418/743] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:02:52.949 [419/743] Linking static target lib/librte_member.a
00:02:52.949 [420/743] Generating lib/rte_security_mingw with a custom command
00:02:53.213 [421/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:02:53.213 [422/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:02:53.213 [423/743] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:53.213 [424/743] Linking static target lib/librte_reorder.a
00:02:53.213 [425/743] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.213 [426/743] Linking target lib/librte_dmadev.so.23.0
00:02:53.213 [427/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:02:53.213 [428/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:02:53.213 [429/743] Linking static target lib/librte_stack.a
00:02:53.213 [430/743] Generating lib/rte_stack_def with a custom command
00:02:53.213 [431/743] Generating lib/rte_stack_mingw with a custom command
00:02:53.471 [432/743] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols
00:02:53.471 [433/743] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.471 [434/743] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.471 [435/743] Linking target lib/librte_member.so.23.0
00:02:53.471 [436/743] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:53.471 [437/743] Linking target lib/librte_reorder.so.23.0
00:02:53.471 [438/743] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.471 [439/743] Linking target lib/librte_stack.so.23.0
00:02:53.471 [440/743] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.471 [441/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:02:53.471 [442/743] Linking static target lib/librte_rib.a
00:02:53.730 [443/743] Linking target lib/librte_power.so.23.0
00:02:53.730 [444/743] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.730 [445/743] Linking target lib/librte_regexdev.so.23.0
00:02:53.730 [446/743] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:53.730 [447/743] Linking static target lib/librte_security.a
00:02:53.988 [448/743] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.988 [449/743] Linking target lib/librte_rib.so.23.0
00:02:54.247 [450/743] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols
00:02:54.247 [451/743] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:54.247 [452/743] Generating lib/rte_vhost_def with a custom command
00:02:54.247 [453/743] Generating lib/rte_vhost_mingw with a custom command
00:02:54.247 [454/743] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:54.247 [455/743] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:54.247 [456/743] Linking target lib/librte_security.so.23.0
00:02:54.506 [457/743] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:54.506 [458/743] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o
00:02:54.506 [459/743] Generating symbol file
lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:54.506 [460/743] Linking static target lib/librte_sched.a 00:02:54.765 [461/743] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.765 [462/743] Linking target lib/librte_sched.so.23.0 00:02:55.024 [463/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:55.024 [464/743] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:55.024 [465/743] Generating lib/rte_ipsec_def with a custom command 00:02:55.024 [466/743] Generating lib/rte_ipsec_mingw with a custom command 00:02:55.024 [467/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:55.024 [468/743] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:55.024 [469/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:55.283 [470/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:55.283 [471/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:55.542 [472/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:55.542 [473/743] Generating lib/rte_fib_def with a custom command 00:02:55.801 [474/743] Generating lib/rte_fib_mingw with a custom command 00:02:55.801 [475/743] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:55.801 [476/743] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:55.801 [477/743] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:55.801 [478/743] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:55.801 [479/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:55.801 [480/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:56.059 [481/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:56.059 [482/743] Linking static target lib/librte_ipsec.a 00:02:56.318 [483/743] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to 
capture output) 00:02:56.318 [484/743] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:56.318 [485/743] Linking target lib/librte_ipsec.so.23.0 00:02:56.577 [486/743] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:56.577 [487/743] Linking static target lib/librte_fib.a 00:02:56.577 [488/743] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:56.577 [489/743] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:56.577 [490/743] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:56.577 [491/743] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:56.835 [492/743] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.835 [493/743] Linking target lib/librte_fib.so.23.0 00:02:57.094 [494/743] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:57.662 [495/743] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:57.662 [496/743] Generating lib/rte_port_def with a custom command 00:02:57.662 [497/743] Generating lib/rte_port_mingw with a custom command 00:02:57.662 [498/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:57.662 [499/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:57.662 [500/743] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:57.662 [501/743] Generating lib/rte_pdump_def with a custom command 00:02:57.662 [502/743] Generating lib/rte_pdump_mingw with a custom command 00:02:57.662 [503/743] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:57.936 [504/743] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:57.936 [505/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:57.936 [506/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:57.936 [507/743] Compiling C object 
lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:58.207 [508/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:58.207 [509/743] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:58.207 [510/743] Linking static target lib/librte_port.a 00:02:58.466 [511/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:58.466 [512/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:58.725 [513/743] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.725 [514/743] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:58.725 [515/743] Linking target lib/librte_port.so.23.0 00:02:58.725 [516/743] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:58.725 [517/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:58.725 [518/743] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:58.725 [519/743] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:58.725 [520/743] Linking static target lib/librte_pdump.a 00:02:58.984 [521/743] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.984 [522/743] Linking target lib/librte_pdump.so.23.0 00:02:59.243 [523/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:59.243 [524/743] Generating lib/rte_table_def with a custom command 00:02:59.243 [525/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:59.243 [526/743] Generating lib/rte_table_mingw with a custom command 00:02:59.243 [527/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:59.502 [528/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:59.502 [529/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:59.761 [530/743] Compiling C object 
lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:59.761 [531/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:59.761 [532/743] Generating lib/rte_pipeline_def with a custom command 00:02:59.761 [533/743] Generating lib/rte_pipeline_mingw with a custom command 00:03:00.020 [534/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:00.020 [535/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:00.020 [536/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:00.020 [537/743] Linking static target lib/librte_table.a 00:03:00.279 [538/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:00.538 [539/743] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:00.538 [540/743] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:00.538 [541/743] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.538 [542/743] Linking target lib/librte_table.so.23.0 00:03:00.797 [543/743] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:00.797 [544/743] Generating lib/rte_graph_def with a custom command 00:03:00.797 [545/743] Generating lib/rte_graph_mingw with a custom command 00:03:00.797 [546/743] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:03:00.797 [547/743] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:01.056 [548/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:01.315 [549/743] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:01.315 [550/743] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:01.315 [551/743] Linking static target lib/librte_graph.a 00:03:01.315 [552/743] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:01.574 [553/743] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:01.574 
[554/743] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:01.574 [555/743] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:02.142 [556/743] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.142 [557/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:02.142 [558/743] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:02.142 [559/743] Generating lib/rte_node_def with a custom command 00:03:02.142 [560/743] Linking target lib/librte_graph.so.23.0 00:03:02.142 [561/743] Generating lib/rte_node_mingw with a custom command 00:03:02.142 [562/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:02.142 [563/743] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:02.142 [564/743] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:03:02.142 [565/743] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:02.400 [566/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:02.400 [567/743] Generating drivers/rte_bus_pci_def with a custom command 00:03:02.400 [568/743] Generating drivers/rte_bus_pci_mingw with a custom command 00:03:02.401 [569/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:02.401 [570/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:02.401 [571/743] Generating drivers/rte_bus_vdev_def with a custom command 00:03:02.401 [572/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:02.401 [573/743] Generating drivers/rte_bus_vdev_mingw with a custom command 00:03:02.401 [574/743] Generating drivers/rte_mempool_ring_def with a custom command 00:03:02.660 [575/743] Generating drivers/rte_mempool_ring_mingw with a custom command 00:03:02.660 [576/743] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:02.660 [577/743] 
Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:02.660 [578/743] Linking static target lib/librte_node.a 00:03:02.660 [579/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:02.660 [580/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:02.660 [581/743] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:02.660 [582/743] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.919 [583/743] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:02.919 [584/743] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:02.919 [585/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:02.919 [586/743] Linking target lib/librte_node.so.23.0 00:03:02.919 [587/743] Linking static target drivers/librte_bus_vdev.a 00:03:02.919 [588/743] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:02.919 [589/743] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:02.919 [590/743] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:02.919 [591/743] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:02.919 [592/743] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.185 [593/743] Linking static target drivers/librte_bus_pci.a 00:03:03.185 [594/743] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:03.185 [595/743] Linking target drivers/librte_bus_vdev.so.23.0 00:03:03.185 [596/743] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:03:03.450 [597/743] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.450 [598/743] Linking target drivers/librte_bus_pci.so.23.0 00:03:03.450 
[599/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:03.450 [600/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:03.450 [601/743] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:03:03.708 [602/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:03.708 [603/743] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:03.708 [604/743] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:03.967 [605/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:03.967 [606/743] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:03.967 [607/743] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:03.967 [608/743] Linking static target drivers/librte_mempool_ring.a 00:03:03.967 [609/743] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:03.967 [610/743] Linking target drivers/librte_mempool_ring.so.23.0 00:03:04.534 [611/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:04.793 [612/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:04.793 [613/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:04.793 [614/743] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:05.361 [615/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:05.361 [616/743] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:05.361 [617/743] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:05.928 [618/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:05.928 [619/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 
00:03:05.928 [620/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:06.187 [621/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:06.187 [622/743] Generating drivers/rte_net_i40e_def with a custom command 00:03:06.187 [623/743] Generating drivers/rte_net_i40e_mingw with a custom command 00:03:06.187 [624/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:06.445 [625/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:07.383 [626/743] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:07.383 [627/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:07.383 [628/743] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:07.642 [629/743] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:07.642 [630/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:07.642 [631/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:07.642 [632/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:07.642 [633/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:07.642 [634/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:07.901 [635/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:03:08.159 [636/743] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:08.425 [637/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:08.425 [638/743] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:08.425 [639/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:08.692 [640/743] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:08.692 [641/743] Compiling C object 
drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:08.692 [642/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:08.692 [643/743] Linking static target drivers/librte_net_i40e.a 00:03:08.692 [644/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:08.692 [645/743] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:08.950 [646/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:08.950 [647/743] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:08.950 [648/743] Linking static target lib/librte_vhost.a 00:03:09.209 [649/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:09.209 [650/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:09.468 [651/743] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.468 [652/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:09.468 [653/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:09.468 [654/743] Linking target drivers/librte_net_i40e.so.23.0 00:03:09.727 [655/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:09.727 [656/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:09.985 [657/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:10.244 [658/743] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.244 [659/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:10.244 [660/743] Linking target lib/librte_vhost.so.23.0 00:03:10.244 [661/743] 
Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:10.244 [662/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:10.244 [663/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:10.502 [664/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:10.502 [665/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:10.502 [666/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:10.760 [667/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:10.760 [668/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:10.760 [669/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:11.018 [670/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:11.018 [671/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:11.276 [672/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:11.276 [673/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:11.537 [674/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:12.104 [675/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:12.104 [676/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:12.104 [677/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:12.362 [678/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:12.362 [679/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:12.362 [680/743] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 
00:03:12.362 [681/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:12.632 [682/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:12.632 [683/743] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:12.903 [684/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:12.903 [685/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:12.903 [686/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:12.903 [687/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:13.162 [688/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:13.420 [689/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:13.420 [690/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:13.420 [691/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:13.420 [692/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:13.420 [693/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:13.420 [694/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:13.987 [695/743] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:13.987 [696/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:13.987 [697/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:13.987 [698/743] Linking static target lib/librte_pipeline.a 00:03:14.246 [699/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:14.505 [700/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:14.505 [701/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:14.505 [702/743] Linking target app/dpdk-dumpcap 
00:03:14.764 [703/743] Linking target app/dpdk-pdump 00:03:14.764 [704/743] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:14.764 [705/743] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:15.024 [706/743] Linking target app/dpdk-proc-info 00:03:15.024 [707/743] Linking target app/dpdk-test-acl 00:03:15.283 [708/743] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:15.283 [709/743] Linking target app/dpdk-test-bbdev 00:03:15.283 [710/743] Linking target app/dpdk-test-cmdline 00:03:15.283 [711/743] Linking target app/dpdk-test-compress-perf 00:03:15.283 [712/743] Linking target app/dpdk-test-crypto-perf 00:03:15.541 [713/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:15.541 [714/743] Linking target app/dpdk-test-eventdev 00:03:15.541 [715/743] Linking target app/dpdk-test-fib 00:03:15.801 [716/743] Linking target app/dpdk-test-flow-perf 00:03:15.801 [717/743] Linking target app/dpdk-test-gpudev 00:03:15.801 [718/743] Linking target app/dpdk-test-pipeline 00:03:16.060 [719/743] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:16.319 [720/743] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:16.319 [721/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:16.578 [722/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:16.578 [723/743] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:16.578 [724/743] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.578 [725/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:16.578 [726/743] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:16.578 [727/743] Linking target lib/librte_pipeline.so.23.0 00:03:16.837 [728/743] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:16.837 [729/743] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:17.096 [730/743] 
Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:17.355 [731/743] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:17.355 [732/743] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:17.614 [733/743] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:17.614 [734/743] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:17.614 [735/743] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:17.873 [736/743] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:17.873 [737/743] Linking target app/dpdk-test-sad 00:03:17.873 [738/743] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:18.132 [739/743] Linking target app/dpdk-test-regex 00:03:18.132 [740/743] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:18.390 [741/743] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:18.649 [742/743] Linking target app/dpdk-test-security-perf 00:03:18.908 [743/743] Linking target app/dpdk-testpmd 00:03:18.908 04:54:18 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:03:18.908 04:54:18 build_native_dpdk -- common/autobuild_common.sh@191 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:18.908 04:54:18 build_native_dpdk -- common/autobuild_common.sh@204 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:18.908 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:18.908 [0/1] Installing files. 
00:03:19.168 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples
00:03:19.168 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:03:19.168 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:03:19.168 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:03:19.168 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:03:19.168 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:03:19.168 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:19.168 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:19.168 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:19.168 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:19.168 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:19.168 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:19.168 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:19.168 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:19.168 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:19.168 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:19.168 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:19.168 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common
00:03:19.168 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec
00:03:19.168 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon
00:03:19.168 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse
00:03:19.168 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:03:19.168 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:03:19.168 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:03:19.168 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:03:19.168 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool
00:03:19.168 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:19.168 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:19.168 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:19.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:19.171
Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.171 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.172 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:19.172 Installing 
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.172 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:19.173 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:19.173 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:19.173 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:19.173 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:19.173 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:19.173 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:19.173 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:19.173 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:19.173 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:19.173 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:19.173 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.173 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.173 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.173 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.173 Installing lib/librte_eal.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.173 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.173 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.173 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.173 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.173 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.173 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.173 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.173 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.173 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.173 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing 
lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing 
lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_rawdev.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.433 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.434 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.434 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.434 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.434 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.434 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.434 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.434 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.434 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.434 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.434 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.434 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.434 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.434 
Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.434 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.434 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.434 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.434 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.434 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.434 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.434 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.434 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.434 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.434 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.434 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:19.434 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.434 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:19.434 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.434 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:19.434 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.434 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:19.434 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.434 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.434 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.434 Installing app/dpdk-test-acl to 
/home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.434 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.434 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.434 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.434 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.434 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.696 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.696 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.696 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.696 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.696 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.696 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.696 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.696 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 
00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.696 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 
Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.697 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing 
/home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing 
/home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing 
/home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.698 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.699 
Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.699 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.699 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.699 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.699 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.699 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.699 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.699 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.699 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:19.699 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:19.699 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:19.699 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:19.699 Installing symlink pointing to librte_kvargs.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:19.699 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:19.699 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:19.699 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:19.699 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:19.699 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:19.699 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:19.699 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:19.699 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:19.699 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:19.699 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:19.699 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:19.699 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:19.699 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:19.699 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:19.699 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:19.699 Installing symlink pointing to librte_meter.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:19.699 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:19.699 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:19.699 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:19.699 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:19.699 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:19.699 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:19.699 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:19.699 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:19.699 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:19.699 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:19.699 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:19.699 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:19.699 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:19.699 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:19.699 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:19.699 Installing symlink pointing to librte_bbdev.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:19.699 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:19.699 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:19.699 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:19.699 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:19.699 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:19.699 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:19.699 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:19.699 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:19.699 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:19.699 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:19.699 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:19.699 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:19.699 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:19.699 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:19.699 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 
00:03:19.699 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:19.699 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:19.699 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:19.699 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:19.699 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:19.699 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:19.699 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:19.699 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:19.699 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:19.699 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:19.699 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:19.699 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:19.699 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:19.699 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:19.699 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:19.699 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:19.699 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:19.699 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:19.699 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:19.699 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:19.699 Installing 
symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:19.699 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:19.699 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:19.699 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:19.699 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:19.699 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:19.699 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:19.699 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:19.699 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:19.699 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:19.699 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:03:19.699 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:19.699 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:19.699 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:19.699 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:19.699 Installing symlink pointing to librte_rawdev.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:19.699 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:19.699 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:19.699 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:19.699 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:19.699 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:19.699 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:19.699 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:19.700 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:19.700 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:19.700 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:19.700 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:19.700 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:19.700 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:19.700 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:19.700 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:19.700 Installing symlink pointing to 
librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:19.700 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:19.700 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:19.700 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:19.700 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:19.700 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:03:19.700 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:19.700 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:19.700 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:19.700 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:19.700 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:19.700 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:19.700 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:19.700 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:19.700 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:19.700 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:19.700 Installing symlink pointing to librte_node.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:19.700 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:19.700 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:19.700 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:19.700 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:19.700 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:19.700 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:19.700 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:19.700 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:19.700 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:19.700 04:54:19 build_native_dpdk -- common/autobuild_common.sh@210 -- $ cat 00:03:19.700 04:54:19 build_native_dpdk -- common/autobuild_common.sh@215 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:19.700 00:03:19.700 real 0m49.326s 00:03:19.700 user 5m47.997s 00:03:19.700 sys 0m58.427s 00:03:19.700 04:54:19 build_native_dpdk -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:19.700 04:54:19 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:19.700 ************************************ 00:03:19.700 END TEST build_native_dpdk 00:03:19.700 ************************************ 00:03:19.700 04:54:19 
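The long run of "Installing symlink pointing to …" lines above is the standard ELF shared-library versioning chain that DPDK's install step (and its `symlink-drivers-solibs.sh` helper) lays down: a real versioned file `libX.so.23.0`, a SONAME link `libX.so.23` used at run time, and an unversioned `libX.so` used at link time. A minimal sketch reproducing that chain in a scratch directory (library name and paths here are hypothetical, not taken from the log):

```shell
# Recreate a librte-style symlink chain in a scratch dir (hypothetical names).
set -e
dir=$(mktemp -d)

touch "$dir/librte_demo.so.23.0"                    # the real versioned library file
ln -s librte_demo.so.23.0 "$dir/librte_demo.so.23"  # SONAME link, resolved by the loader
ln -s librte_demo.so.23   "$dir/librte_demo.so"     # dev link, resolved by the linker (-lrte_demo)

ls -l "$dir"
readlink "$dir/librte_demo.so"      # prints the link target: librte_demo.so.23
```

The two-level chain lets minor releases (23.0 → 23.1) replace the real file while runtime consumers keep resolving through the stable `.so.23` SONAME link.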
-- common/autotest_common.sh@1142 -- $ return 0 00:03:19.700 04:54:19 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:19.700 04:54:19 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:19.700 04:54:19 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:19.700 04:54:19 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:19.700 04:54:19 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:19.700 04:54:19 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:19.700 04:54:19 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:19.700 04:54:19 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-rbd --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:19.959 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:19.959 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:19.959 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:19.959 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:20.526 Using 'verbs' RDMA provider 00:03:34.119 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:48.995 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:48.995 Creating mk/config.mk...done. 00:03:48.995 Creating mk/cc.flags.mk...done. 00:03:48.995 Type 'make' to build. 
00:03:48.995 04:54:47 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:48.995 04:54:47 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:03:48.995 04:54:47 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:48.995 04:54:47 -- common/autotest_common.sh@10 -- $ set +x 00:03:48.995 ************************************ 00:03:48.995 START TEST make 00:03:48.995 ************************************ 00:03:48.995 04:54:47 make -- common/autotest_common.sh@1123 -- $ make -j10 00:03:48.995 make[1]: Nothing to be done for 'all'. 00:04:10.934 CC lib/ut_mock/mock.o 00:04:10.934 CC lib/ut/ut.o 00:04:10.934 CC lib/log/log_flags.o 00:04:10.934 CC lib/log/log_deprecated.o 00:04:10.934 CC lib/log/log.o 00:04:10.934 LIB libspdk_ut.a 00:04:10.934 SO libspdk_ut.so.2.0 00:04:10.934 LIB libspdk_log.a 00:04:10.934 LIB libspdk_ut_mock.a 00:04:10.934 SO libspdk_ut_mock.so.6.0 00:04:10.934 SO libspdk_log.so.7.0 00:04:10.934 SYMLINK libspdk_ut.so 00:04:10.934 SYMLINK libspdk_ut_mock.so 00:04:10.934 SYMLINK libspdk_log.so 00:04:10.934 CC lib/dma/dma.o 00:04:10.934 CXX lib/trace_parser/trace.o 00:04:10.934 CC lib/util/base64.o 00:04:10.934 CC lib/util/bit_array.o 00:04:10.934 CC lib/util/cpuset.o 00:04:10.934 CC lib/util/crc16.o 00:04:10.934 CC lib/ioat/ioat.o 00:04:10.934 CC lib/util/crc32.o 00:04:10.934 CC lib/util/crc32c.o 00:04:10.934 CC lib/vfio_user/host/vfio_user_pci.o 00:04:10.934 CC lib/vfio_user/host/vfio_user.o 00:04:10.934 CC lib/util/crc32_ieee.o 00:04:10.934 CC lib/util/crc64.o 00:04:10.934 LIB libspdk_dma.a 00:04:10.934 CC lib/util/dif.o 00:04:10.934 SO libspdk_dma.so.4.0 00:04:10.934 CC lib/util/fd.o 00:04:10.934 CC lib/util/fd_group.o 00:04:10.934 SYMLINK libspdk_dma.so 00:04:10.934 CC lib/util/file.o 00:04:10.934 CC lib/util/hexlify.o 00:04:10.934 CC lib/util/iov.o 00:04:10.934 LIB libspdk_ioat.a 00:04:10.934 SO libspdk_ioat.so.7.0 00:04:10.934 CC lib/util/math.o 00:04:10.934 CC lib/util/net.o 00:04:10.934 LIB libspdk_vfio_user.a 00:04:10.934 SYMLINK 
libspdk_ioat.so 00:04:10.934 CC lib/util/pipe.o 00:04:10.934 CC lib/util/strerror_tls.o 00:04:10.934 SO libspdk_vfio_user.so.5.0 00:04:10.934 CC lib/util/string.o 00:04:10.934 CC lib/util/uuid.o 00:04:10.934 SYMLINK libspdk_vfio_user.so 00:04:10.934 CC lib/util/xor.o 00:04:10.934 CC lib/util/zipf.o 00:04:10.934 LIB libspdk_util.a 00:04:10.934 SO libspdk_util.so.10.0 00:04:10.934 LIB libspdk_trace_parser.a 00:04:10.934 SYMLINK libspdk_util.so 00:04:10.934 SO libspdk_trace_parser.so.5.0 00:04:10.934 SYMLINK libspdk_trace_parser.so 00:04:10.934 CC lib/vmd/vmd.o 00:04:10.934 CC lib/vmd/led.o 00:04:10.934 CC lib/json/json_parse.o 00:04:10.934 CC lib/rdma_utils/rdma_utils.o 00:04:10.934 CC lib/json/json_util.o 00:04:10.934 CC lib/json/json_write.o 00:04:10.934 CC lib/conf/conf.o 00:04:10.934 CC lib/rdma_provider/common.o 00:04:10.934 CC lib/idxd/idxd.o 00:04:10.934 CC lib/env_dpdk/env.o 00:04:10.934 CC lib/env_dpdk/memory.o 00:04:10.934 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:10.934 LIB libspdk_conf.a 00:04:10.934 CC lib/env_dpdk/pci.o 00:04:10.934 CC lib/env_dpdk/init.o 00:04:10.934 SO libspdk_conf.so.6.0 00:04:10.934 LIB libspdk_rdma_utils.a 00:04:10.934 LIB libspdk_json.a 00:04:10.934 SO libspdk_rdma_utils.so.1.0 00:04:10.934 SYMLINK libspdk_conf.so 00:04:10.934 CC lib/env_dpdk/threads.o 00:04:10.934 SO libspdk_json.so.6.0 00:04:10.934 SYMLINK libspdk_rdma_utils.so 00:04:10.934 CC lib/env_dpdk/pci_ioat.o 00:04:10.934 SYMLINK libspdk_json.so 00:04:10.934 CC lib/env_dpdk/pci_virtio.o 00:04:10.934 LIB libspdk_rdma_provider.a 00:04:10.934 SO libspdk_rdma_provider.so.6.0 00:04:10.934 CC lib/env_dpdk/pci_vmd.o 00:04:10.934 CC lib/env_dpdk/pci_idxd.o 00:04:10.934 SYMLINK libspdk_rdma_provider.so 00:04:10.934 CC lib/env_dpdk/pci_event.o 00:04:10.934 CC lib/idxd/idxd_user.o 00:04:10.934 CC lib/env_dpdk/sigbus_handler.o 00:04:10.934 CC lib/env_dpdk/pci_dpdk.o 00:04:10.934 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:10.934 LIB libspdk_vmd.a 00:04:10.934 CC 
lib/env_dpdk/pci_dpdk_2211.o 00:04:10.934 CC lib/idxd/idxd_kernel.o 00:04:10.934 SO libspdk_vmd.so.6.0 00:04:11.193 CC lib/jsonrpc/jsonrpc_server.o 00:04:11.193 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:11.193 CC lib/jsonrpc/jsonrpc_client.o 00:04:11.193 SYMLINK libspdk_vmd.so 00:04:11.193 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:11.193 LIB libspdk_idxd.a 00:04:11.193 SO libspdk_idxd.so.12.0 00:04:11.451 SYMLINK libspdk_idxd.so 00:04:11.451 LIB libspdk_jsonrpc.a 00:04:11.451 SO libspdk_jsonrpc.so.6.0 00:04:11.451 SYMLINK libspdk_jsonrpc.so 00:04:11.709 CC lib/rpc/rpc.o 00:04:11.709 LIB libspdk_env_dpdk.a 00:04:11.967 SO libspdk_env_dpdk.so.15.0 00:04:11.967 LIB libspdk_rpc.a 00:04:11.967 SO libspdk_rpc.so.6.0 00:04:11.967 SYMLINK libspdk_env_dpdk.so 00:04:11.967 SYMLINK libspdk_rpc.so 00:04:12.225 CC lib/notify/notify.o 00:04:12.225 CC lib/notify/notify_rpc.o 00:04:12.225 CC lib/keyring/keyring_rpc.o 00:04:12.225 CC lib/keyring/keyring.o 00:04:12.225 CC lib/trace/trace.o 00:04:12.225 CC lib/trace/trace_flags.o 00:04:12.225 CC lib/trace/trace_rpc.o 00:04:12.484 LIB libspdk_notify.a 00:04:12.484 SO libspdk_notify.so.6.0 00:04:12.484 LIB libspdk_keyring.a 00:04:12.484 SYMLINK libspdk_notify.so 00:04:12.484 LIB libspdk_trace.a 00:04:12.484 SO libspdk_keyring.so.1.0 00:04:12.484 SO libspdk_trace.so.10.0 00:04:12.741 SYMLINK libspdk_keyring.so 00:04:12.741 SYMLINK libspdk_trace.so 00:04:13.000 CC lib/sock/sock.o 00:04:13.000 CC lib/sock/sock_rpc.o 00:04:13.000 CC lib/thread/thread.o 00:04:13.000 CC lib/thread/iobuf.o 00:04:13.574 LIB libspdk_sock.a 00:04:13.574 SO libspdk_sock.so.10.0 00:04:13.574 SYMLINK libspdk_sock.so 00:04:13.833 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:13.833 CC lib/nvme/nvme_ctrlr.o 00:04:13.833 CC lib/nvme/nvme_ns_cmd.o 00:04:13.833 CC lib/nvme/nvme_fabric.o 00:04:13.833 CC lib/nvme/nvme_ns.o 00:04:13.833 CC lib/nvme/nvme_pcie_common.o 00:04:13.833 CC lib/nvme/nvme_qpair.o 00:04:13.833 CC lib/nvme/nvme_pcie.o 00:04:13.833 CC lib/nvme/nvme.o 
00:04:14.399 LIB libspdk_thread.a 00:04:14.657 SO libspdk_thread.so.10.1 00:04:14.657 SYMLINK libspdk_thread.so 00:04:14.657 CC lib/nvme/nvme_quirks.o 00:04:14.657 CC lib/nvme/nvme_transport.o 00:04:14.657 CC lib/nvme/nvme_discovery.o 00:04:14.657 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:14.657 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:14.657 CC lib/nvme/nvme_tcp.o 00:04:14.915 CC lib/nvme/nvme_opal.o 00:04:14.915 CC lib/nvme/nvme_io_msg.o 00:04:14.915 CC lib/nvme/nvme_poll_group.o 00:04:15.173 CC lib/nvme/nvme_zns.o 00:04:15.173 CC lib/nvme/nvme_stubs.o 00:04:15.173 CC lib/nvme/nvme_auth.o 00:04:15.431 CC lib/nvme/nvme_cuse.o 00:04:15.431 CC lib/nvme/nvme_rdma.o 00:04:15.689 CC lib/accel/accel.o 00:04:15.689 CC lib/blob/blobstore.o 00:04:15.689 CC lib/accel/accel_rpc.o 00:04:15.948 CC lib/accel/accel_sw.o 00:04:15.948 CC lib/init/json_config.o 00:04:15.948 CC lib/virtio/virtio.o 00:04:16.206 CC lib/virtio/virtio_vhost_user.o 00:04:16.206 CC lib/init/subsystem.o 00:04:16.206 CC lib/init/subsystem_rpc.o 00:04:16.206 CC lib/init/rpc.o 00:04:16.206 CC lib/virtio/virtio_vfio_user.o 00:04:16.206 CC lib/blob/request.o 00:04:16.464 CC lib/blob/zeroes.o 00:04:16.464 CC lib/blob/blob_bs_dev.o 00:04:16.464 CC lib/virtio/virtio_pci.o 00:04:16.464 LIB libspdk_init.a 00:04:16.464 SO libspdk_init.so.5.0 00:04:16.464 LIB libspdk_accel.a 00:04:16.464 SYMLINK libspdk_init.so 00:04:16.723 SO libspdk_accel.so.16.0 00:04:16.723 SYMLINK libspdk_accel.so 00:04:16.723 LIB libspdk_virtio.a 00:04:16.723 LIB libspdk_nvme.a 00:04:16.723 SO libspdk_virtio.so.7.0 00:04:16.723 CC lib/event/app.o 00:04:16.723 CC lib/event/reactor.o 00:04:16.723 CC lib/event/log_rpc.o 00:04:16.723 CC lib/event/app_rpc.o 00:04:16.723 CC lib/event/scheduler_static.o 00:04:16.723 SYMLINK libspdk_virtio.so 00:04:16.983 CC lib/bdev/bdev.o 00:04:16.983 CC lib/bdev/bdev_rpc.o 00:04:16.983 CC lib/bdev/bdev_zone.o 00:04:16.983 SO libspdk_nvme.so.13.1 00:04:16.983 CC lib/bdev/part.o 00:04:16.983 CC lib/bdev/scsi_nvme.o 
00:04:17.242 LIB libspdk_event.a 00:04:17.242 SO libspdk_event.so.14.0 00:04:17.242 SYMLINK libspdk_nvme.so 00:04:17.242 SYMLINK libspdk_event.so 00:04:18.618 LIB libspdk_blob.a 00:04:18.618 SO libspdk_blob.so.11.0 00:04:18.877 SYMLINK libspdk_blob.so 00:04:18.877 CC lib/lvol/lvol.o 00:04:19.135 CC lib/blobfs/blobfs.o 00:04:19.135 CC lib/blobfs/tree.o 00:04:19.135 LIB libspdk_bdev.a 00:04:19.393 SO libspdk_bdev.so.16.0 00:04:19.393 SYMLINK libspdk_bdev.so 00:04:19.651 CC lib/ublk/ublk.o 00:04:19.651 CC lib/ublk/ublk_rpc.o 00:04:19.651 CC lib/nvmf/ctrlr.o 00:04:19.651 CC lib/nvmf/ctrlr_discovery.o 00:04:19.651 CC lib/nvmf/ctrlr_bdev.o 00:04:19.651 CC lib/ftl/ftl_core.o 00:04:19.651 CC lib/scsi/dev.o 00:04:19.651 CC lib/nbd/nbd.o 00:04:19.909 CC lib/nbd/nbd_rpc.o 00:04:19.909 LIB libspdk_lvol.a 00:04:19.909 CC lib/scsi/lun.o 00:04:19.909 LIB libspdk_blobfs.a 00:04:19.909 SO libspdk_lvol.so.10.0 00:04:19.909 SO libspdk_blobfs.so.10.0 00:04:19.909 SYMLINK libspdk_lvol.so 00:04:19.909 CC lib/nvmf/subsystem.o 00:04:19.909 SYMLINK libspdk_blobfs.so 00:04:19.909 CC lib/ftl/ftl_init.o 00:04:20.167 CC lib/nvmf/nvmf.o 00:04:20.167 CC lib/ftl/ftl_layout.o 00:04:20.167 LIB libspdk_nbd.a 00:04:20.167 CC lib/ftl/ftl_debug.o 00:04:20.167 SO libspdk_nbd.so.7.0 00:04:20.167 SYMLINK libspdk_nbd.so 00:04:20.167 CC lib/ftl/ftl_io.o 00:04:20.167 CC lib/ftl/ftl_sb.o 00:04:20.167 CC lib/scsi/port.o 00:04:20.167 LIB libspdk_ublk.a 00:04:20.425 SO libspdk_ublk.so.3.0 00:04:20.425 CC lib/nvmf/nvmf_rpc.o 00:04:20.425 CC lib/nvmf/transport.o 00:04:20.425 SYMLINK libspdk_ublk.so 00:04:20.425 CC lib/nvmf/tcp.o 00:04:20.425 CC lib/ftl/ftl_l2p.o 00:04:20.425 CC lib/scsi/scsi.o 00:04:20.425 CC lib/scsi/scsi_bdev.o 00:04:20.425 CC lib/ftl/ftl_l2p_flat.o 00:04:20.683 CC lib/ftl/ftl_nv_cache.o 00:04:20.683 CC lib/ftl/ftl_band.o 00:04:20.683 CC lib/ftl/ftl_band_ops.o 00:04:20.942 CC lib/nvmf/stubs.o 00:04:20.942 CC lib/scsi/scsi_pr.o 00:04:20.942 CC lib/ftl/ftl_writer.o 00:04:20.942 CC 
lib/scsi/scsi_rpc.o 00:04:20.942 CC lib/nvmf/mdns_server.o 00:04:21.200 CC lib/nvmf/rdma.o 00:04:21.200 CC lib/scsi/task.o 00:04:21.201 CC lib/nvmf/auth.o 00:04:21.201 CC lib/ftl/ftl_rq.o 00:04:21.201 CC lib/ftl/ftl_reloc.o 00:04:21.201 CC lib/ftl/ftl_l2p_cache.o 00:04:21.201 CC lib/ftl/ftl_p2l.o 00:04:21.459 LIB libspdk_scsi.a 00:04:21.459 SO libspdk_scsi.so.9.0 00:04:21.459 CC lib/ftl/mngt/ftl_mngt.o 00:04:21.459 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:21.459 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:21.459 SYMLINK libspdk_scsi.so 00:04:21.718 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:21.718 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:21.718 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:21.718 CC lib/iscsi/conn.o 00:04:21.718 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:21.718 CC lib/vhost/vhost.o 00:04:21.976 CC lib/vhost/vhost_rpc.o 00:04:21.976 CC lib/vhost/vhost_scsi.o 00:04:21.976 CC lib/iscsi/init_grp.o 00:04:21.976 CC lib/vhost/vhost_blk.o 00:04:21.976 CC lib/vhost/rte_vhost_user.o 00:04:21.976 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:21.976 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:22.234 CC lib/iscsi/iscsi.o 00:04:22.234 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:22.234 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:22.492 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:22.492 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:22.492 CC lib/iscsi/md5.o 00:04:22.492 CC lib/ftl/utils/ftl_conf.o 00:04:22.492 CC lib/ftl/utils/ftl_md.o 00:04:22.751 CC lib/ftl/utils/ftl_mempool.o 00:04:22.751 CC lib/ftl/utils/ftl_bitmap.o 00:04:22.751 CC lib/ftl/utils/ftl_property.o 00:04:22.751 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:22.751 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:22.751 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:23.010 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:23.010 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:23.010 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:23.010 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:23.010 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:23.010 CC lib/ftl/upgrade/ftl_sb_v5.o 
00:04:23.010 CC lib/iscsi/param.o 00:04:23.010 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:23.267 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:23.267 CC lib/ftl/base/ftl_base_dev.o 00:04:23.267 LIB libspdk_vhost.a 00:04:23.267 LIB libspdk_nvmf.a 00:04:23.267 CC lib/ftl/base/ftl_base_bdev.o 00:04:23.267 CC lib/iscsi/portal_grp.o 00:04:23.267 CC lib/ftl/ftl_trace.o 00:04:23.267 SO libspdk_vhost.so.8.0 00:04:23.267 SO libspdk_nvmf.so.19.0 00:04:23.525 CC lib/iscsi/tgt_node.o 00:04:23.526 SYMLINK libspdk_vhost.so 00:04:23.526 CC lib/iscsi/iscsi_subsystem.o 00:04:23.526 CC lib/iscsi/iscsi_rpc.o 00:04:23.526 CC lib/iscsi/task.o 00:04:23.526 LIB libspdk_ftl.a 00:04:23.526 SYMLINK libspdk_nvmf.so 00:04:23.786 SO libspdk_ftl.so.9.0 00:04:23.786 LIB libspdk_iscsi.a 00:04:24.049 SO libspdk_iscsi.so.8.0 00:04:24.307 SYMLINK libspdk_iscsi.so 00:04:24.307 SYMLINK libspdk_ftl.so 00:04:24.566 CC module/env_dpdk/env_dpdk_rpc.o 00:04:24.566 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:24.566 CC module/accel/iaa/accel_iaa.o 00:04:24.566 CC module/accel/dsa/accel_dsa.o 00:04:24.566 CC module/accel/ioat/accel_ioat.o 00:04:24.566 CC module/accel/error/accel_error.o 00:04:24.566 CC module/sock/posix/posix.o 00:04:24.824 CC module/blob/bdev/blob_bdev.o 00:04:24.824 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:24.824 CC module/keyring/file/keyring.o 00:04:24.824 LIB libspdk_env_dpdk_rpc.a 00:04:24.824 SO libspdk_env_dpdk_rpc.so.6.0 00:04:24.824 SYMLINK libspdk_env_dpdk_rpc.so 00:04:24.824 CC module/accel/ioat/accel_ioat_rpc.o 00:04:24.824 CC module/keyring/file/keyring_rpc.o 00:04:24.824 CC module/accel/error/accel_error_rpc.o 00:04:24.825 LIB libspdk_scheduler_dpdk_governor.a 00:04:24.825 CC module/accel/iaa/accel_iaa_rpc.o 00:04:24.825 LIB libspdk_scheduler_dynamic.a 00:04:24.825 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:24.825 SO libspdk_scheduler_dynamic.so.4.0 00:04:24.825 CC module/accel/dsa/accel_dsa_rpc.o 00:04:25.083 SYMLINK libspdk_scheduler_dpdk_governor.so 
00:04:25.083 LIB libspdk_blob_bdev.a 00:04:25.083 LIB libspdk_accel_ioat.a 00:04:25.083 LIB libspdk_keyring_file.a 00:04:25.083 SYMLINK libspdk_scheduler_dynamic.so 00:04:25.083 LIB libspdk_accel_error.a 00:04:25.083 SO libspdk_blob_bdev.so.11.0 00:04:25.083 SO libspdk_accel_ioat.so.6.0 00:04:25.083 SO libspdk_keyring_file.so.1.0 00:04:25.083 LIB libspdk_accel_iaa.a 00:04:25.084 SO libspdk_accel_error.so.2.0 00:04:25.084 CC module/keyring/linux/keyring.o 00:04:25.084 CC module/keyring/linux/keyring_rpc.o 00:04:25.084 SO libspdk_accel_iaa.so.3.0 00:04:25.084 SYMLINK libspdk_blob_bdev.so 00:04:25.084 SYMLINK libspdk_accel_ioat.so 00:04:25.084 LIB libspdk_accel_dsa.a 00:04:25.084 SYMLINK libspdk_keyring_file.so 00:04:25.084 SYMLINK libspdk_accel_error.so 00:04:25.084 SO libspdk_accel_dsa.so.5.0 00:04:25.084 SYMLINK libspdk_accel_iaa.so 00:04:25.084 CC module/scheduler/gscheduler/gscheduler.o 00:04:25.084 SYMLINK libspdk_accel_dsa.so 00:04:25.342 LIB libspdk_keyring_linux.a 00:04:25.342 SO libspdk_keyring_linux.so.1.0 00:04:25.342 LIB libspdk_scheduler_gscheduler.a 00:04:25.342 SYMLINK libspdk_keyring_linux.so 00:04:25.342 SO libspdk_scheduler_gscheduler.so.4.0 00:04:25.342 CC module/bdev/delay/vbdev_delay.o 00:04:25.342 CC module/bdev/gpt/gpt.o 00:04:25.342 CC module/bdev/malloc/bdev_malloc.o 00:04:25.342 CC module/bdev/lvol/vbdev_lvol.o 00:04:25.342 CC module/bdev/error/vbdev_error.o 00:04:25.342 CC module/bdev/null/bdev_null.o 00:04:25.342 CC module/blobfs/bdev/blobfs_bdev.o 00:04:25.342 LIB libspdk_sock_posix.a 00:04:25.342 SYMLINK libspdk_scheduler_gscheduler.so 00:04:25.342 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:25.601 SO libspdk_sock_posix.so.6.0 00:04:25.601 CC module/bdev/nvme/bdev_nvme.o 00:04:25.601 SYMLINK libspdk_sock_posix.so 00:04:25.601 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:25.601 CC module/bdev/gpt/vbdev_gpt.o 00:04:25.601 CC module/bdev/nvme/nvme_rpc.o 00:04:25.601 LIB libspdk_blobfs_bdev.a 00:04:25.601 CC 
module/bdev/error/vbdev_error_rpc.o 00:04:25.601 CC module/bdev/null/bdev_null_rpc.o 00:04:25.601 SO libspdk_blobfs_bdev.so.6.0 00:04:25.860 SYMLINK libspdk_blobfs_bdev.so 00:04:25.860 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:25.860 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:25.860 CC module/bdev/nvme/bdev_mdns_client.o 00:04:25.860 LIB libspdk_bdev_null.a 00:04:25.860 LIB libspdk_bdev_gpt.a 00:04:25.860 LIB libspdk_bdev_error.a 00:04:25.860 SO libspdk_bdev_gpt.so.6.0 00:04:25.860 SO libspdk_bdev_null.so.6.0 00:04:25.860 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:25.860 SO libspdk_bdev_error.so.6.0 00:04:25.860 LIB libspdk_bdev_delay.a 00:04:25.860 CC module/bdev/passthru/vbdev_passthru.o 00:04:25.860 LIB libspdk_bdev_malloc.a 00:04:25.860 SO libspdk_bdev_delay.so.6.0 00:04:25.860 SYMLINK libspdk_bdev_null.so 00:04:25.860 SYMLINK libspdk_bdev_gpt.so 00:04:25.860 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:26.118 SO libspdk_bdev_malloc.so.6.0 00:04:26.118 SYMLINK libspdk_bdev_error.so 00:04:26.118 SYMLINK libspdk_bdev_malloc.so 00:04:26.118 SYMLINK libspdk_bdev_delay.so 00:04:26.118 CC module/bdev/nvme/vbdev_opal.o 00:04:26.118 CC module/bdev/raid/bdev_raid.o 00:04:26.118 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:26.118 CC module/bdev/split/vbdev_split.o 00:04:26.376 CC module/bdev/ftl/bdev_ftl.o 00:04:26.376 CC module/bdev/aio/bdev_aio.o 00:04:26.376 LIB libspdk_bdev_lvol.a 00:04:26.377 LIB libspdk_bdev_passthru.a 00:04:26.377 SO libspdk_bdev_passthru.so.6.0 00:04:26.377 SO libspdk_bdev_lvol.so.6.0 00:04:26.377 CC module/bdev/iscsi/bdev_iscsi.o 00:04:26.377 SYMLINK libspdk_bdev_passthru.so 00:04:26.377 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:26.377 SYMLINK libspdk_bdev_lvol.so 00:04:26.377 CC module/bdev/split/vbdev_split_rpc.o 00:04:26.377 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:26.377 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:26.635 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:26.635 LIB libspdk_bdev_split.a 00:04:26.635 CC 
module/bdev/aio/bdev_aio_rpc.o 00:04:26.635 SO libspdk_bdev_split.so.6.0 00:04:26.635 CC module/bdev/raid/bdev_raid_rpc.o 00:04:26.635 LIB libspdk_bdev_ftl.a 00:04:26.635 SYMLINK libspdk_bdev_split.so 00:04:26.635 CC module/bdev/raid/bdev_raid_sb.o 00:04:26.635 LIB libspdk_bdev_iscsi.a 00:04:26.635 LIB libspdk_bdev_zone_block.a 00:04:26.635 SO libspdk_bdev_ftl.so.6.0 00:04:26.635 CC module/bdev/rbd/bdev_rbd.o 00:04:26.635 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:26.894 SO libspdk_bdev_iscsi.so.6.0 00:04:26.894 SO libspdk_bdev_zone_block.so.6.0 00:04:26.894 LIB libspdk_bdev_aio.a 00:04:26.894 SYMLINK libspdk_bdev_ftl.so 00:04:26.894 SO libspdk_bdev_aio.so.6.0 00:04:26.894 SYMLINK libspdk_bdev_zone_block.so 00:04:26.894 SYMLINK libspdk_bdev_iscsi.so 00:04:26.894 CC module/bdev/rbd/bdev_rbd_rpc.o 00:04:26.894 CC module/bdev/raid/raid0.o 00:04:26.894 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:26.894 SYMLINK libspdk_bdev_aio.so 00:04:26.894 CC module/bdev/raid/raid1.o 00:04:26.894 CC module/bdev/raid/concat.o 00:04:26.894 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:27.152 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:27.152 LIB libspdk_bdev_rbd.a 00:04:27.152 LIB libspdk_bdev_raid.a 00:04:27.152 SO libspdk_bdev_rbd.so.7.0 00:04:27.152 SO libspdk_bdev_raid.so.6.0 00:04:27.410 LIB libspdk_bdev_virtio.a 00:04:27.410 SYMLINK libspdk_bdev_rbd.so 00:04:27.410 SO libspdk_bdev_virtio.so.6.0 00:04:27.410 SYMLINK libspdk_bdev_raid.so 00:04:27.410 SYMLINK libspdk_bdev_virtio.so 00:04:27.668 LIB libspdk_bdev_nvme.a 00:04:27.668 SO libspdk_bdev_nvme.so.7.0 00:04:27.926 SYMLINK libspdk_bdev_nvme.so 00:04:28.493 CC module/event/subsystems/iobuf/iobuf.o 00:04:28.493 CC module/event/subsystems/sock/sock.o 00:04:28.493 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:28.493 CC module/event/subsystems/vmd/vmd.o 00:04:28.493 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:28.493 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:28.493 CC 
module/event/subsystems/scheduler/scheduler.o 00:04:28.493 CC module/event/subsystems/keyring/keyring.o 00:04:28.493 LIB libspdk_event_vhost_blk.a 00:04:28.493 LIB libspdk_event_keyring.a 00:04:28.493 LIB libspdk_event_vmd.a 00:04:28.493 LIB libspdk_event_sock.a 00:04:28.493 SO libspdk_event_vhost_blk.so.3.0 00:04:28.493 SO libspdk_event_keyring.so.1.0 00:04:28.493 LIB libspdk_event_scheduler.a 00:04:28.493 LIB libspdk_event_iobuf.a 00:04:28.493 SO libspdk_event_scheduler.so.4.0 00:04:28.493 SO libspdk_event_vmd.so.6.0 00:04:28.493 SO libspdk_event_sock.so.5.0 00:04:28.493 SYMLINK libspdk_event_vhost_blk.so 00:04:28.493 SYMLINK libspdk_event_keyring.so 00:04:28.493 SO libspdk_event_iobuf.so.3.0 00:04:28.493 SYMLINK libspdk_event_sock.so 00:04:28.750 SYMLINK libspdk_event_scheduler.so 00:04:28.750 SYMLINK libspdk_event_vmd.so 00:04:28.750 SYMLINK libspdk_event_iobuf.so 00:04:29.008 CC module/event/subsystems/accel/accel.o 00:04:29.008 LIB libspdk_event_accel.a 00:04:29.008 SO libspdk_event_accel.so.6.0 00:04:29.267 SYMLINK libspdk_event_accel.so 00:04:29.526 CC module/event/subsystems/bdev/bdev.o 00:04:29.784 LIB libspdk_event_bdev.a 00:04:29.784 SO libspdk_event_bdev.so.6.0 00:04:29.784 SYMLINK libspdk_event_bdev.so 00:04:30.043 CC module/event/subsystems/scsi/scsi.o 00:04:30.043 CC module/event/subsystems/nbd/nbd.o 00:04:30.043 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:30.043 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:30.043 CC module/event/subsystems/ublk/ublk.o 00:04:30.302 LIB libspdk_event_nbd.a 00:04:30.302 LIB libspdk_event_ublk.a 00:04:30.302 LIB libspdk_event_scsi.a 00:04:30.302 SO libspdk_event_nbd.so.6.0 00:04:30.302 SO libspdk_event_ublk.so.3.0 00:04:30.302 SO libspdk_event_scsi.so.6.0 00:04:30.302 SYMLINK libspdk_event_scsi.so 00:04:30.302 SYMLINK libspdk_event_nbd.so 00:04:30.302 SYMLINK libspdk_event_ublk.so 00:04:30.302 LIB libspdk_event_nvmf.a 00:04:30.302 SO libspdk_event_nvmf.so.6.0 00:04:30.302 SYMLINK libspdk_event_nvmf.so 
00:04:30.560 CC module/event/subsystems/iscsi/iscsi.o 00:04:30.560 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:30.827 LIB libspdk_event_vhost_scsi.a 00:04:30.827 LIB libspdk_event_iscsi.a 00:04:30.827 SO libspdk_event_vhost_scsi.so.3.0 00:04:30.827 SO libspdk_event_iscsi.so.6.0 00:04:30.827 SYMLINK libspdk_event_vhost_scsi.so 00:04:30.827 SYMLINK libspdk_event_iscsi.so 00:04:31.100 SO libspdk.so.6.0 00:04:31.100 SYMLINK libspdk.so 00:04:31.357 CC app/trace_record/trace_record.o 00:04:31.357 CXX app/trace/trace.o 00:04:31.357 CC app/spdk_nvme_perf/perf.o 00:04:31.357 CC app/spdk_lspci/spdk_lspci.o 00:04:31.357 CC app/nvmf_tgt/nvmf_main.o 00:04:31.357 CC app/iscsi_tgt/iscsi_tgt.o 00:04:31.357 CC app/spdk_tgt/spdk_tgt.o 00:04:31.357 CC examples/ioat/perf/perf.o 00:04:31.357 CC examples/util/zipf/zipf.o 00:04:31.357 CC test/thread/poller_perf/poller_perf.o 00:04:31.357 LINK spdk_lspci 00:04:31.615 LINK nvmf_tgt 00:04:31.615 LINK spdk_trace_record 00:04:31.615 LINK zipf 00:04:31.615 LINK iscsi_tgt 00:04:31.615 LINK poller_perf 00:04:31.615 LINK spdk_tgt 00:04:31.615 LINK ioat_perf 00:04:31.615 CC app/spdk_nvme_identify/identify.o 00:04:31.615 LINK spdk_trace 00:04:31.872 CC app/spdk_nvme_discover/discovery_aer.o 00:04:31.872 CC app/spdk_top/spdk_top.o 00:04:31.872 CC examples/ioat/verify/verify.o 00:04:31.872 CC app/spdk_dd/spdk_dd.o 00:04:31.872 CC test/dma/test_dma/test_dma.o 00:04:32.130 CC app/fio/nvme/fio_plugin.o 00:04:32.130 LINK spdk_nvme_discover 00:04:32.130 TEST_HEADER include/spdk/accel.h 00:04:32.130 TEST_HEADER include/spdk/accel_module.h 00:04:32.130 TEST_HEADER include/spdk/assert.h 00:04:32.130 TEST_HEADER include/spdk/barrier.h 00:04:32.130 TEST_HEADER include/spdk/base64.h 00:04:32.130 TEST_HEADER include/spdk/bdev.h 00:04:32.130 TEST_HEADER include/spdk/bdev_module.h 00:04:32.130 TEST_HEADER include/spdk/bdev_zone.h 00:04:32.130 TEST_HEADER include/spdk/bit_array.h 00:04:32.130 TEST_HEADER include/spdk/bit_pool.h 00:04:32.130 TEST_HEADER 
include/spdk/blob_bdev.h 00:04:32.130 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:32.130 TEST_HEADER include/spdk/blobfs.h 00:04:32.130 TEST_HEADER include/spdk/blob.h 00:04:32.130 TEST_HEADER include/spdk/conf.h 00:04:32.130 TEST_HEADER include/spdk/config.h 00:04:32.130 TEST_HEADER include/spdk/cpuset.h 00:04:32.130 TEST_HEADER include/spdk/crc16.h 00:04:32.130 TEST_HEADER include/spdk/crc32.h 00:04:32.130 TEST_HEADER include/spdk/crc64.h 00:04:32.130 TEST_HEADER include/spdk/dif.h 00:04:32.130 TEST_HEADER include/spdk/dma.h 00:04:32.130 TEST_HEADER include/spdk/endian.h 00:04:32.130 TEST_HEADER include/spdk/env_dpdk.h 00:04:32.130 TEST_HEADER include/spdk/env.h 00:04:32.130 TEST_HEADER include/spdk/event.h 00:04:32.130 TEST_HEADER include/spdk/fd_group.h 00:04:32.130 TEST_HEADER include/spdk/fd.h 00:04:32.130 TEST_HEADER include/spdk/file.h 00:04:32.130 TEST_HEADER include/spdk/ftl.h 00:04:32.130 TEST_HEADER include/spdk/gpt_spec.h 00:04:32.130 TEST_HEADER include/spdk/hexlify.h 00:04:32.130 TEST_HEADER include/spdk/histogram_data.h 00:04:32.130 TEST_HEADER include/spdk/idxd.h 00:04:32.130 TEST_HEADER include/spdk/idxd_spec.h 00:04:32.130 TEST_HEADER include/spdk/init.h 00:04:32.130 LINK verify 00:04:32.130 TEST_HEADER include/spdk/ioat.h 00:04:32.130 TEST_HEADER include/spdk/ioat_spec.h 00:04:32.130 CC test/app/bdev_svc/bdev_svc.o 00:04:32.130 TEST_HEADER include/spdk/iscsi_spec.h 00:04:32.130 TEST_HEADER include/spdk/json.h 00:04:32.130 TEST_HEADER include/spdk/jsonrpc.h 00:04:32.130 TEST_HEADER include/spdk/keyring.h 00:04:32.130 LINK spdk_nvme_perf 00:04:32.130 TEST_HEADER include/spdk/keyring_module.h 00:04:32.130 TEST_HEADER include/spdk/likely.h 00:04:32.130 TEST_HEADER include/spdk/log.h 00:04:32.130 TEST_HEADER include/spdk/lvol.h 00:04:32.130 TEST_HEADER include/spdk/memory.h 00:04:32.130 TEST_HEADER include/spdk/mmio.h 00:04:32.130 TEST_HEADER include/spdk/nbd.h 00:04:32.130 TEST_HEADER include/spdk/net.h 00:04:32.130 TEST_HEADER 
include/spdk/notify.h 00:04:32.130 TEST_HEADER include/spdk/nvme.h 00:04:32.130 TEST_HEADER include/spdk/nvme_intel.h 00:04:32.130 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:32.130 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:32.130 TEST_HEADER include/spdk/nvme_spec.h 00:04:32.130 TEST_HEADER include/spdk/nvme_zns.h 00:04:32.130 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:32.130 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:32.130 TEST_HEADER include/spdk/nvmf.h 00:04:32.130 TEST_HEADER include/spdk/nvmf_spec.h 00:04:32.130 TEST_HEADER include/spdk/nvmf_transport.h 00:04:32.130 TEST_HEADER include/spdk/opal.h 00:04:32.130 TEST_HEADER include/spdk/opal_spec.h 00:04:32.130 TEST_HEADER include/spdk/pci_ids.h 00:04:32.130 TEST_HEADER include/spdk/pipe.h 00:04:32.130 TEST_HEADER include/spdk/queue.h 00:04:32.130 TEST_HEADER include/spdk/reduce.h 00:04:32.130 TEST_HEADER include/spdk/rpc.h 00:04:32.130 TEST_HEADER include/spdk/scheduler.h 00:04:32.130 TEST_HEADER include/spdk/scsi.h 00:04:32.130 TEST_HEADER include/spdk/scsi_spec.h 00:04:32.130 TEST_HEADER include/spdk/sock.h 00:04:32.130 TEST_HEADER include/spdk/stdinc.h 00:04:32.130 TEST_HEADER include/spdk/string.h 00:04:32.130 TEST_HEADER include/spdk/thread.h 00:04:32.130 TEST_HEADER include/spdk/trace.h 00:04:32.130 TEST_HEADER include/spdk/trace_parser.h 00:04:32.130 TEST_HEADER include/spdk/tree.h 00:04:32.130 TEST_HEADER include/spdk/ublk.h 00:04:32.130 TEST_HEADER include/spdk/util.h 00:04:32.130 TEST_HEADER include/spdk/uuid.h 00:04:32.387 TEST_HEADER include/spdk/version.h 00:04:32.387 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:32.387 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:32.387 TEST_HEADER include/spdk/vhost.h 00:04:32.387 TEST_HEADER include/spdk/vmd.h 00:04:32.387 TEST_HEADER include/spdk/xor.h 00:04:32.387 TEST_HEADER include/spdk/zipf.h 00:04:32.387 CXX test/cpp_headers/accel.o 00:04:32.387 LINK spdk_dd 00:04:32.387 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:32.387 LINK bdev_svc 
00:04:32.387 LINK test_dma 00:04:32.387 CC app/vhost/vhost.o 00:04:32.387 CC app/fio/bdev/fio_plugin.o 00:04:32.387 CXX test/cpp_headers/accel_module.o 00:04:32.387 LINK spdk_nvme_identify 00:04:32.645 LINK interrupt_tgt 00:04:32.645 LINK spdk_nvme 00:04:32.645 LINK vhost 00:04:32.645 CXX test/cpp_headers/assert.o 00:04:32.645 LINK spdk_top 00:04:32.902 CC test/env/vtophys/vtophys.o 00:04:32.902 CC test/event/event_perf/event_perf.o 00:04:32.902 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:32.902 CC test/env/mem_callbacks/mem_callbacks.o 00:04:32.902 CXX test/cpp_headers/barrier.o 00:04:32.902 CC test/nvme/aer/aer.o 00:04:32.902 LINK vtophys 00:04:32.902 LINK event_perf 00:04:32.902 CC examples/thread/thread/thread_ex.o 00:04:33.160 LINK spdk_bdev 00:04:33.160 CXX test/cpp_headers/base64.o 00:04:33.160 LINK mem_callbacks 00:04:33.160 CC examples/sock/hello_world/hello_sock.o 00:04:33.160 CC examples/vmd/lsvmd/lsvmd.o 00:04:33.160 LINK aer 00:04:33.160 LINK nvme_fuzz 00:04:33.160 CC test/event/reactor/reactor.o 00:04:33.160 CXX test/cpp_headers/bdev.o 00:04:33.160 CC test/rpc_client/rpc_client_test.o 00:04:33.160 LINK thread 00:04:33.160 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:33.418 LINK lsvmd 00:04:33.418 CC examples/idxd/perf/perf.o 00:04:33.418 LINK hello_sock 00:04:33.418 LINK reactor 00:04:33.418 CXX test/cpp_headers/bdev_module.o 00:04:33.418 CC test/nvme/reset/reset.o 00:04:33.418 LINK env_dpdk_post_init 00:04:33.418 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:33.418 LINK rpc_client_test 00:04:33.676 CC examples/vmd/led/led.o 00:04:33.676 CC test/event/reactor_perf/reactor_perf.o 00:04:33.676 CC test/env/memory/memory_ut.o 00:04:33.676 CXX test/cpp_headers/bdev_zone.o 00:04:33.676 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:33.676 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:33.676 LINK idxd_perf 00:04:33.676 LINK reset 00:04:33.676 LINK led 00:04:33.676 CC test/nvme/sgl/sgl.o 00:04:33.676 LINK reactor_perf 00:04:33.934 CXX 
test/cpp_headers/bit_array.o 00:04:33.934 CC test/app/jsoncat/jsoncat.o 00:04:33.934 CC test/app/histogram_perf/histogram_perf.o 00:04:33.934 CXX test/cpp_headers/bit_pool.o 00:04:33.934 CC test/app/stub/stub.o 00:04:33.934 LINK sgl 00:04:33.934 LINK histogram_perf 00:04:33.934 LINK jsoncat 00:04:34.192 CC test/event/app_repeat/app_repeat.o 00:04:34.192 LINK vhost_fuzz 00:04:34.192 CC examples/nvme/hello_world/hello_world.o 00:04:34.192 CXX test/cpp_headers/blob_bdev.o 00:04:34.192 LINK stub 00:04:34.192 LINK app_repeat 00:04:34.192 CC test/env/pci/pci_ut.o 00:04:34.192 CC test/nvme/e2edp/nvme_dp.o 00:04:34.451 CC test/event/scheduler/scheduler.o 00:04:34.451 LINK hello_world 00:04:34.451 CXX test/cpp_headers/blobfs_bdev.o 00:04:34.451 LINK memory_ut 00:04:34.451 CC test/nvme/overhead/overhead.o 00:04:34.451 CC test/nvme/err_injection/err_injection.o 00:04:34.710 LINK scheduler 00:04:34.710 LINK nvme_dp 00:04:34.710 CC test/nvme/startup/startup.o 00:04:34.710 CXX test/cpp_headers/blobfs.o 00:04:34.710 CC examples/nvme/reconnect/reconnect.o 00:04:34.710 LINK pci_ut 00:04:34.710 LINK err_injection 00:04:34.710 CXX test/cpp_headers/blob.o 00:04:34.710 LINK overhead 00:04:34.710 LINK startup 00:04:34.970 CC test/accel/dif/dif.o 00:04:34.970 CXX test/cpp_headers/conf.o 00:04:34.970 CXX test/cpp_headers/config.o 00:04:34.970 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:34.970 LINK reconnect 00:04:34.970 CC test/nvme/reserve/reserve.o 00:04:34.970 CC test/blobfs/mkfs/mkfs.o 00:04:34.970 CXX test/cpp_headers/cpuset.o 00:04:35.229 CC test/lvol/esnap/esnap.o 00:04:35.229 CC examples/accel/perf/accel_perf.o 00:04:35.229 LINK iscsi_fuzz 00:04:35.229 CC examples/blob/hello_world/hello_blob.o 00:04:35.229 LINK reserve 00:04:35.229 CXX test/cpp_headers/crc16.o 00:04:35.229 LINK dif 00:04:35.229 LINK mkfs 00:04:35.229 CC examples/nvme/arbitration/arbitration.o 00:04:35.488 LINK hello_blob 00:04:35.488 CXX test/cpp_headers/crc32.o 00:04:35.488 LINK nvme_manage 00:04:35.488 CC 
examples/blob/cli/blobcli.o 00:04:35.488 CC test/nvme/simple_copy/simple_copy.o 00:04:35.488 LINK accel_perf 00:04:35.488 CC examples/nvme/hotplug/hotplug.o 00:04:35.746 CC test/nvme/connect_stress/connect_stress.o 00:04:35.746 CXX test/cpp_headers/crc64.o 00:04:35.746 LINK arbitration 00:04:35.746 CXX test/cpp_headers/dif.o 00:04:35.746 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:35.746 LINK simple_copy 00:04:35.746 LINK connect_stress 00:04:35.746 CXX test/cpp_headers/dma.o 00:04:35.746 LINK hotplug 00:04:36.006 CC examples/nvme/abort/abort.o 00:04:36.006 LINK cmb_copy 00:04:36.006 CC test/bdev/bdevio/bdevio.o 00:04:36.006 LINK blobcli 00:04:36.006 CC examples/bdev/hello_world/hello_bdev.o 00:04:36.006 CXX test/cpp_headers/endian.o 00:04:36.006 CC examples/bdev/bdevperf/bdevperf.o 00:04:36.006 CC test/nvme/boot_partition/boot_partition.o 00:04:36.006 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:36.265 CC test/nvme/compliance/nvme_compliance.o 00:04:36.265 CXX test/cpp_headers/env_dpdk.o 00:04:36.265 CXX test/cpp_headers/env.o 00:04:36.265 LINK hello_bdev 00:04:36.265 LINK abort 00:04:36.265 LINK boot_partition 00:04:36.265 LINK pmr_persistence 00:04:36.265 CXX test/cpp_headers/event.o 00:04:36.265 LINK bdevio 00:04:36.523 CXX test/cpp_headers/fd_group.o 00:04:36.523 CXX test/cpp_headers/fd.o 00:04:36.523 LINK nvme_compliance 00:04:36.523 CXX test/cpp_headers/file.o 00:04:36.523 CC test/nvme/fused_ordering/fused_ordering.o 00:04:36.523 CXX test/cpp_headers/ftl.o 00:04:36.523 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:36.523 CXX test/cpp_headers/gpt_spec.o 00:04:36.523 CC test/nvme/fdp/fdp.o 00:04:36.523 CXX test/cpp_headers/hexlify.o 00:04:36.782 CXX test/cpp_headers/histogram_data.o 00:04:36.782 CC test/nvme/cuse/cuse.o 00:04:36.782 CXX test/cpp_headers/idxd.o 00:04:36.782 LINK fused_ordering 00:04:36.782 CXX test/cpp_headers/idxd_spec.o 00:04:36.782 CXX test/cpp_headers/init.o 00:04:36.782 LINK doorbell_aers 00:04:36.782 LINK bdevperf 
00:04:36.782 CXX test/cpp_headers/ioat.o 00:04:37.040 LINK fdp 00:04:37.041 CXX test/cpp_headers/ioat_spec.o 00:04:37.041 CXX test/cpp_headers/iscsi_spec.o 00:04:37.041 CXX test/cpp_headers/json.o 00:04:37.041 CXX test/cpp_headers/jsonrpc.o 00:04:37.041 CXX test/cpp_headers/keyring.o 00:04:37.041 CXX test/cpp_headers/keyring_module.o 00:04:37.041 CXX test/cpp_headers/likely.o 00:04:37.041 CXX test/cpp_headers/log.o 00:04:37.041 CXX test/cpp_headers/lvol.o 00:04:37.041 CXX test/cpp_headers/memory.o 00:04:37.041 CXX test/cpp_headers/mmio.o 00:04:37.041 CXX test/cpp_headers/nbd.o 00:04:37.041 CXX test/cpp_headers/net.o 00:04:37.299 CXX test/cpp_headers/notify.o 00:04:37.299 CXX test/cpp_headers/nvme.o 00:04:37.299 CXX test/cpp_headers/nvme_intel.o 00:04:37.299 CXX test/cpp_headers/nvme_ocssd.o 00:04:37.299 CC examples/nvmf/nvmf/nvmf.o 00:04:37.299 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:37.299 CXX test/cpp_headers/nvme_spec.o 00:04:37.299 CXX test/cpp_headers/nvme_zns.o 00:04:37.299 CXX test/cpp_headers/nvmf_cmd.o 00:04:37.299 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:37.299 CXX test/cpp_headers/nvmf.o 00:04:37.557 CXX test/cpp_headers/nvmf_spec.o 00:04:37.557 CXX test/cpp_headers/nvmf_transport.o 00:04:37.557 CXX test/cpp_headers/opal.o 00:04:37.557 CXX test/cpp_headers/opal_spec.o 00:04:37.557 CXX test/cpp_headers/pci_ids.o 00:04:37.557 LINK nvmf 00:04:37.557 CXX test/cpp_headers/pipe.o 00:04:37.557 CXX test/cpp_headers/queue.o 00:04:37.557 CXX test/cpp_headers/reduce.o 00:04:37.558 CXX test/cpp_headers/rpc.o 00:04:37.558 CXX test/cpp_headers/scheduler.o 00:04:37.558 CXX test/cpp_headers/scsi.o 00:04:37.558 CXX test/cpp_headers/scsi_spec.o 00:04:37.816 CXX test/cpp_headers/sock.o 00:04:37.816 CXX test/cpp_headers/stdinc.o 00:04:37.816 CXX test/cpp_headers/string.o 00:04:37.816 CXX test/cpp_headers/thread.o 00:04:37.816 CXX test/cpp_headers/trace.o 00:04:37.816 CXX test/cpp_headers/trace_parser.o 00:04:37.816 CXX test/cpp_headers/tree.o 00:04:37.816 CXX 
test/cpp_headers/ublk.o 00:04:37.816 CXX test/cpp_headers/util.o 00:04:37.816 CXX test/cpp_headers/uuid.o 00:04:37.816 CXX test/cpp_headers/version.o 00:04:37.816 CXX test/cpp_headers/vfio_user_pci.o 00:04:37.816 CXX test/cpp_headers/vfio_user_spec.o 00:04:38.076 CXX test/cpp_headers/vhost.o 00:04:38.076 CXX test/cpp_headers/vmd.o 00:04:38.076 LINK cuse 00:04:38.076 CXX test/cpp_headers/xor.o 00:04:38.076 CXX test/cpp_headers/zipf.o 00:04:39.979 LINK esnap 00:04:39.979 00:04:39.979 real 0m52.956s 00:04:39.979 user 4m52.335s 00:04:39.979 sys 1m7.384s 00:04:39.979 04:55:40 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:04:39.979 04:55:40 make -- common/autotest_common.sh@10 -- $ set +x 00:04:39.979 ************************************ 00:04:39.979 END TEST make 00:04:39.979 ************************************ 00:04:40.284 04:55:40 -- common/autotest_common.sh@1142 -- $ return 0 00:04:40.284 04:55:40 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:40.284 04:55:40 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:40.284 04:55:40 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:40.284 04:55:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:40.284 04:55:40 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:40.284 04:55:40 -- pm/common@44 -- $ pid=5889 00:04:40.284 04:55:40 -- pm/common@50 -- $ kill -TERM 5889 00:04:40.284 04:55:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:40.284 04:55:40 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:40.284 04:55:40 -- pm/common@44 -- $ pid=5890 00:04:40.284 04:55:40 -- pm/common@50 -- $ kill -TERM 5890 00:04:40.284 04:55:40 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:40.284 04:55:40 -- nvmf/common.sh@7 -- # uname -s 00:04:40.284 04:55:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:40.284 
04:55:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:40.284 04:55:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:40.284 04:55:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:40.284 04:55:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:40.284 04:55:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:40.284 04:55:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:40.284 04:55:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:40.284 04:55:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:40.284 04:55:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:40.284 04:55:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b62462d-2eeb-436d-9516-51c2e436d86a 00:04:40.284 04:55:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b62462d-2eeb-436d-9516-51c2e436d86a 00:04:40.284 04:55:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:40.284 04:55:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:40.284 04:55:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:40.284 04:55:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:40.284 04:55:40 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:40.284 04:55:40 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:40.284 04:55:40 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:40.284 04:55:40 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:40.284 04:55:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.284 04:55:40 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.284 04:55:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.284 04:55:40 -- paths/export.sh@5 -- # export PATH 00:04:40.284 04:55:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.284 04:55:40 -- nvmf/common.sh@47 -- # : 0 00:04:40.284 04:55:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:40.285 04:55:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:40.285 04:55:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:40.285 04:55:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:40.285 04:55:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:40.285 04:55:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:40.285 04:55:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:40.285 04:55:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:40.285 04:55:40 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:40.285 04:55:40 -- spdk/autotest.sh@32 -- # uname -s 00:04:40.285 04:55:40 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:40.285 04:55:40 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:40.285 04:55:40 -- spdk/autotest.sh@34 -- # mkdir -p 
/home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:40.285 04:55:40 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:40.285 04:55:40 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:40.285 04:55:40 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:40.285 04:55:40 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:40.285 04:55:40 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:40.285 04:55:40 -- spdk/autotest.sh@48 -- # udevadm_pid=64823 00:04:40.285 04:55:40 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:40.285 04:55:40 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:40.285 04:55:40 -- pm/common@17 -- # local monitor 00:04:40.285 04:55:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:40.285 04:55:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:40.285 04:55:40 -- pm/common@25 -- # sleep 1 00:04:40.285 04:55:40 -- pm/common@21 -- # date +%s 00:04:40.285 04:55:40 -- pm/common@21 -- # date +%s 00:04:40.285 04:55:40 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721710540 00:04:40.285 04:55:40 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721710540 00:04:40.285 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721710540_collect-vmstat.pm.log 00:04:40.285 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721710540_collect-cpu-load.pm.log 00:04:41.221 04:55:41 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:41.221 04:55:41 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:41.221 04:55:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:41.221 04:55:41 -- 
common/autotest_common.sh@10 -- # set +x 00:04:41.221 04:55:41 -- spdk/autotest.sh@59 -- # create_test_list 00:04:41.221 04:55:41 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:41.221 04:55:41 -- common/autotest_common.sh@10 -- # set +x 00:04:41.480 04:55:41 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:41.480 04:55:41 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:41.480 04:55:41 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:41.480 04:55:41 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:41.480 04:55:41 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:41.480 04:55:41 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:41.480 04:55:41 -- common/autotest_common.sh@1455 -- # uname 00:04:41.480 04:55:41 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:41.480 04:55:41 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:41.480 04:55:41 -- common/autotest_common.sh@1475 -- # uname 00:04:41.480 04:55:41 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:41.480 04:55:41 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:41.480 04:55:41 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:41.480 04:55:41 -- spdk/autotest.sh@72 -- # hash lcov 00:04:41.480 04:55:41 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:41.480 04:55:41 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:41.480 --rc lcov_branch_coverage=1 00:04:41.480 --rc lcov_function_coverage=1 00:04:41.480 --rc genhtml_branch_coverage=1 00:04:41.480 --rc genhtml_function_coverage=1 00:04:41.480 --rc genhtml_legend=1 00:04:41.480 --rc geninfo_all_blocks=1 00:04:41.480 ' 00:04:41.480 04:55:41 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:41.480 --rc lcov_branch_coverage=1 00:04:41.480 --rc lcov_function_coverage=1 00:04:41.480 --rc genhtml_branch_coverage=1 00:04:41.481 --rc 
genhtml_function_coverage=1 00:04:41.481 --rc genhtml_legend=1 00:04:41.481 --rc geninfo_all_blocks=1 00:04:41.481 ' 00:04:41.481 04:55:41 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:41.481 --rc lcov_branch_coverage=1 00:04:41.481 --rc lcov_function_coverage=1 00:04:41.481 --rc genhtml_branch_coverage=1 00:04:41.481 --rc genhtml_function_coverage=1 00:04:41.481 --rc genhtml_legend=1 00:04:41.481 --rc geninfo_all_blocks=1 00:04:41.481 --no-external' 00:04:41.481 04:55:41 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:41.481 --rc lcov_branch_coverage=1 00:04:41.481 --rc lcov_function_coverage=1 00:04:41.481 --rc genhtml_branch_coverage=1 00:04:41.481 --rc genhtml_function_coverage=1 00:04:41.481 --rc genhtml_legend=1 00:04:41.481 --rc geninfo_all_blocks=1 00:04:41.481 --no-external' 00:04:41.481 04:55:41 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:41.481 lcov: LCOV version 1.14 00:04:41.481 04:55:41 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:56.389 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:56.389 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:06.372 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:06.372 geninfo: WARNING: 
GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:06.372 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:06.372 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:06.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:06.373 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:06.373 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:06.373 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:06.373 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:06.373 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:06.373 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:06.373 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:06.373 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:06.373 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:06.373 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:06.373 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:06.373 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:06.373 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:06.373 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:06.373 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:06.373 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:06.373 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:06.373 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:06.373 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:06.373 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:06.373 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:06.373 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:06.373 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:06.373 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:06.373 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:06.373 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:06.373 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:06.373 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:06.373 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:06.373 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:06.373 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:06.373 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:06.373 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:06.373 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:06.373 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:06.373 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:06.632 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:06.632 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:06.632 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:06.632 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:06.632 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:06.632 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:06.632 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:06.632 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:06.632 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 
00:05:06.632 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:06.632 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:06.632 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:06.632 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:06.632 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:06.632 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:06.632 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:06.632 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:06.632 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:06.632 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:06.632 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:06.632 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:06.632 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:06.632 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:06.632 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:06.632 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:06.632 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:06.632 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:06.632 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:06.632 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:06.632 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:06.632 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:06.632 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:06.632 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:06.632 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:06.632 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:06.632 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:09.918 04:56:09 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:09.918 04:56:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:09.918 04:56:09 -- common/autotest_common.sh@10 -- # set +x 00:05:09.918 04:56:09 -- spdk/autotest.sh@91 -- # rm -f 00:05:09.918 04:56:09 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:10.176 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:10.176 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:10.176 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:10.176 04:56:10 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:10.176 04:56:10 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:10.177 04:56:10 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:10.177 04:56:10 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:10.177 04:56:10 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:10.177 04:56:10 -- common/autotest_common.sh@1673 -- # is_block_zoned 
nvme0n1 00:05:10.177 04:56:10 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:10.177 04:56:10 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:10.177 04:56:10 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:10.177 04:56:10 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:10.177 04:56:10 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:05:10.177 04:56:10 -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:05:10.177 04:56:10 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:05:10.177 04:56:10 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:10.177 04:56:10 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:10.177 04:56:10 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 00:05:10.177 04:56:10 -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:05:10.177 04:56:10 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:05:10.177 04:56:10 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:10.177 04:56:10 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:10.177 04:56:10 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:10.177 04:56:10 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:10.177 04:56:10 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:10.177 04:56:10 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:10.177 04:56:10 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:10.177 04:56:10 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:10.177 04:56:10 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:10.177 04:56:10 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:10.177 04:56:10 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:05:10.177 04:56:10 -- scripts/common.sh@387 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:10.177 No valid GPT data, bailing 00:05:10.177 04:56:10 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:10.177 04:56:10 -- scripts/common.sh@391 -- # pt= 00:05:10.177 04:56:10 -- scripts/common.sh@392 -- # return 1 00:05:10.177 04:56:10 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:10.177 1+0 records in 00:05:10.177 1+0 records out 00:05:10.177 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00439028 s, 239 MB/s 00:05:10.177 04:56:10 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:10.177 04:56:10 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:10.177 04:56:10 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n2 00:05:10.177 04:56:10 -- scripts/common.sh@378 -- # local block=/dev/nvme0n2 pt 00:05:10.177 04:56:10 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n2 00:05:10.436 No valid GPT data, bailing 00:05:10.436 04:56:10 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:05:10.436 04:56:10 -- scripts/common.sh@391 -- # pt= 00:05:10.436 04:56:10 -- scripts/common.sh@392 -- # return 1 00:05:10.436 04:56:10 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n2 bs=1M count=1 00:05:10.436 1+0 records in 00:05:10.436 1+0 records out 00:05:10.436 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0047725 s, 220 MB/s 00:05:10.436 04:56:10 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:10.436 04:56:10 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:10.436 04:56:10 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n3 00:05:10.436 04:56:10 -- scripts/common.sh@378 -- # local block=/dev/nvme0n3 pt 00:05:10.436 04:56:10 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n3 00:05:10.436 No valid GPT data, bailing 00:05:10.436 04:56:10 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:05:10.436 04:56:10 -- 
scripts/common.sh@391 -- # pt= 00:05:10.436 04:56:10 -- scripts/common.sh@392 -- # return 1 00:05:10.436 04:56:10 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n3 bs=1M count=1 00:05:10.436 1+0 records in 00:05:10.436 1+0 records out 00:05:10.436 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00440317 s, 238 MB/s 00:05:10.436 04:56:10 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:10.436 04:56:10 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:10.436 04:56:10 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:05:10.436 04:56:10 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:05:10.436 04:56:10 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:10.436 No valid GPT data, bailing 00:05:10.436 04:56:10 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:10.436 04:56:10 -- scripts/common.sh@391 -- # pt= 00:05:10.436 04:56:10 -- scripts/common.sh@392 -- # return 1 00:05:10.436 04:56:10 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:10.436 1+0 records in 00:05:10.436 1+0 records out 00:05:10.436 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00497601 s, 211 MB/s 00:05:10.436 04:56:10 -- spdk/autotest.sh@118 -- # sync 00:05:10.694 04:56:10 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:10.694 04:56:10 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:10.694 04:56:10 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:12.598 04:56:12 -- spdk/autotest.sh@124 -- # uname -s 00:05:12.598 04:56:12 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:12.598 04:56:12 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:12.598 04:56:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.598 04:56:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.598 04:56:12 -- 
common/autotest_common.sh@10 -- # set +x 00:05:12.598 ************************************ 00:05:12.598 START TEST setup.sh 00:05:12.598 ************************************ 00:05:12.598 04:56:12 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:12.598 * Looking for test storage... 00:05:12.598 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:12.598 04:56:12 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:05:12.598 04:56:12 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:12.598 04:56:12 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:12.598 04:56:12 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.598 04:56:12 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.598 04:56:12 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:12.598 ************************************ 00:05:12.598 START TEST acl 00:05:12.598 ************************************ 00:05:12.598 04:56:12 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:12.598 * Looking for test storage... 
00:05:12.598 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:12.598 04:56:12 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:05:12.598 04:56:12 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:12.598 04:56:12 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:12.598 04:56:12 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:12.598 04:56:12 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:12.598 04:56:12 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:12.598 04:56:12 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:12.598 04:56:12 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:12.598 04:56:12 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:12.598 04:56:12 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:12.598 04:56:12 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:05:12.598 04:56:12 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:05:12.598 04:56:12 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:05:12.598 04:56:12 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:12.598 04:56:12 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:12.598 04:56:12 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 00:05:12.598 04:56:12 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:05:12.598 04:56:12 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:05:12.598 04:56:12 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:12.598 04:56:12 setup.sh.acl -- common/autotest_common.sh@1672 
-- # for nvme in /sys/block/nvme* 00:05:12.598 04:56:12 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:12.599 04:56:12 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:12.599 04:56:12 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:12.599 04:56:12 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:12.599 04:56:12 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:05:12.599 04:56:12 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:05:12.599 04:56:12 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:05:12.599 04:56:12 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:05:12.599 04:56:12 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:05:12.599 04:56:12 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:12.599 04:56:12 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:13.541 04:56:13 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:13.541 04:56:13 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:05:13.541 04:56:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:13.541 04:56:13 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:05:13.541 04:56:13 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:13.541 04:56:13 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:14.107 04:56:14 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:05:14.107 04:56:14 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:14.107 04:56:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:14.107 Hugepages 00:05:14.107 node hugesize free / total 00:05:14.107 04:56:14 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:14.107 04:56:14 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:14.107 04:56:14 
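The `get_zoned_devs`/`is_block_zoned` calls traced above decide whether each NVMe namespace is zoned by reading its sysfs `queue/zoned` attribute. A minimal standalone re-creation of that logic, using a fabricated sysfs tree instead of the real `/sys/block/nvme*` the script walks:

```shell
#!/usr/bin/env bash
# A device counts as zoned when its queue/zoned attribute reads anything
# other than "none"; a missing attribute means the kernel reports it as
# not zoned. The directory layout below is fabricated for the demo.
is_block_zoned() {
  local sysfs_root=$1 device=$2
  local zoned_file=$sysfs_root/$device/queue/zoned
  [[ -e $zoned_file ]] || return 1   # attribute absent: not zoned
  [[ $(<"$zoned_file") != none ]]
}

root=$(mktemp -d)
mkdir -p "$root/nvme0n1/queue" "$root/nvme1n1/queue"
echo none         > "$root/nvme0n1/queue/zoned"
echo host-managed > "$root/nvme1n1/queue/zoned"

zoned_devs=()
for dev in nvme0n1 nvme1n1; do
  if is_block_zoned "$root" "$dev"; then
    zoned_devs+=("$dev")
  fi
done
```

In the trace, every namespace evaluates `[[ none != none ]]`, so `zoned_devs` stays empty and the acl tests proceed against all devices.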
setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:14.107 00:05:14.107 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:14.107 04:56:14 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:14.107 04:56:14 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:14.107 04:56:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:14.107 04:56:14 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:14.107 04:56:14 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:14.107 04:56:14 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:14.107 04:56:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:14.107 04:56:14 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:05:14.107 04:56:14 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:14.107 04:56:14 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:14.107 04:56:14 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:14.107 04:56:14 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:14.107 04:56:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:14.366 04:56:14 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:05:14.366 04:56:14 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:14.366 04:56:14 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:14.366 04:56:14 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:14.366 04:56:14 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:14.366 04:56:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:14.366 04:56:14 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:05:14.366 04:56:14 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:05:14.366 04:56:14 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:14.366 04:56:14 
setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.366 04:56:14 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:14.366 ************************************ 00:05:14.366 START TEST denied 00:05:14.366 ************************************ 00:05:14.366 04:56:14 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:05:14.366 04:56:14 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:05:14.366 04:56:14 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:05:14.366 04:56:14 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:05:14.366 04:56:14 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.366 04:56:14 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:15.301 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:05:15.301 04:56:15 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:05:15.302 04:56:15 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:05:15.302 04:56:15 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:15.302 04:56:15 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:05:15.302 04:56:15 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:05:15.302 04:56:15 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:15.302 04:56:15 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:15.302 04:56:15 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:15.302 04:56:15 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:15.302 04:56:15 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:15.560 00:05:15.560 real 0m1.397s 00:05:15.560 user 0m0.542s 00:05:15.560 sys 
0m0.794s 00:05:15.560 04:56:15 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.560 04:56:15 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:05:15.560 ************************************ 00:05:15.560 END TEST denied 00:05:15.560 ************************************ 00:05:15.819 04:56:15 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:05:15.819 04:56:15 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:15.819 04:56:15 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.819 04:56:15 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.819 04:56:15 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:15.819 ************************************ 00:05:15.819 START TEST allowed 00:05:15.819 ************************************ 00:05:15.819 04:56:15 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:05:15.819 04:56:15 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:05:15.819 04:56:15 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:15.819 04:56:15 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:05:15.819 04:56:15 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:15.819 04:56:15 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:16.385 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:16.385 04:56:16 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:05:16.385 04:56:16 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:16.385 04:56:16 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:16.385 04:56:16 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:05:16.385 04:56:16 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f 
/sys/bus/pci/devices/0000:00:11.0/driver 00:05:16.385 04:56:16 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:16.385 04:56:16 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:16.385 04:56:16 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:16.385 04:56:16 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:16.385 04:56:16 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:17.324 ************************************ 00:05:17.324 END TEST allowed 00:05:17.324 ************************************ 00:05:17.324 00:05:17.324 real 0m1.478s 00:05:17.324 user 0m0.666s 00:05:17.324 sys 0m0.804s 00:05:17.324 04:56:17 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.324 04:56:17 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:17.324 04:56:17 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:05:17.324 ************************************ 00:05:17.324 END TEST acl 00:05:17.324 ************************************ 00:05:17.324 00:05:17.324 real 0m4.637s 00:05:17.324 user 0m1.986s 00:05:17.324 sys 0m2.593s 00:05:17.324 04:56:17 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.324 04:56:17 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:17.324 04:56:17 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:17.324 04:56:17 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:17.324 04:56:17 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.324 04:56:17 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.324 04:56:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:17.324 ************************************ 00:05:17.324 START TEST hugepages 00:05:17.324 ************************************ 
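The `verify` step in both acl tests above resolves the driver bound to a PCI BDF by following the `devices/<bdf>/driver` symlink and comparing its basename (`nvme` before rebinding, `uio_pci_generic` after). A sketch of that check against a fabricated stand-in for `/sys/bus/pci`:

```shell
#!/usr/bin/env bash
# The bound driver for a PCI device is exposed as a symlink named
# "driver" inside its sysfs device directory; its target's basename is
# the driver name. Paths here are made up for the demo.
root=$(mktemp -d)
mkdir -p "$root/drivers/nvme" "$root/devices/0000:00:10.0"
ln -s "$root/drivers/nvme" "$root/devices/0000:00:10.0/driver"

driver_path=$(readlink -f "$root/devices/0000:00:10.0/driver")
driver=${driver_path##*/}   # e.g. "nvme" or "uio_pci_generic"
```

The trace's `[[ nvme == \n\v\m\e ]]` comparison is this same basename check, with the pattern backslash-escaped so it matches literally.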
00:05:17.324 04:56:17 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:17.324 * Looking for test storage... 00:05:17.324 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 4877692 kB' 'MemAvailable: 7387652 kB' 'Buffers: 2436 kB' 'Cached: 2714800 kB' 'SwapCached: 0 kB' 
'Active: 436744 kB' 'Inactive: 2385844 kB' 'Active(anon): 115844 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2385844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 106932 kB' 'Mapped: 48796 kB' 'Shmem: 10492 kB' 'KReclaimable: 80312 kB' 'Slab: 157864 kB' 'SReclaimable: 80312 kB' 'SUnreclaim: 77552 kB' 'KernelStack: 6636 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 336556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@32 
-- # continue 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ 
Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.324 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.325 04:56:17 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.325 04:56:17 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.325 04:56:17 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.325 04:56:17 
setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.325 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.326 04:56:17 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/hugepages.sh@17 -- # 
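The long loop traced above is `get_meminfo` scanning `/proc/meminfo` one `Key: value unit` line at a time, skipping every field until `Hugepagesize` matches, then echoing its value (2048). A compact equivalent, fed from a literal sample rather than the live `/proc/meminfo` so the result is deterministic:

```shell
#!/usr/bin/env bash
# Read "Key: value unit" records with IFS=': ' and return the value of
# the first record whose key matches the requested name.
get_meminfo() {
  local want=$1 var val _
  while IFS=': ' read -r var val _; do
    if [[ $var == "$want" ]]; then
      echo "$val"
      return 0
    fi
  done
  return 1
}

# Sample lines copied from the trace output above.
hugepagesize=$(get_meminfo Hugepagesize <<'EOF'
MemTotal: 12241976 kB
MemFree: 4877692 kB
HugePages_Total: 2048
Hugepagesize: 2048 kB
EOF
)
```

This is why the trace ends the loop with `echo 2048` and `return 0`: the matching line supplies both the value and the success status.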
default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:17.326 04:56:17 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:17.326 
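The `clear_hp` calls above release any pre-existing reservation by writing `0` to every per-node, per-size `nr_hugepages` file before the tests set their own counts. Reproduced here against a fabricated node tree (the real path is `/sys/devices/system/node/node<N>/hugepages`, which needs root to write):

```shell
#!/usr/bin/env bash
# Seed a fake node tree with nonzero reservations, then zero them the
# way clear_hp does: one write per hugepage size directory.
root=$(mktemp -d)
mkdir -p "$root/node0/hugepages/hugepages-2048kB" \
         "$root/node0/hugepages/hugepages-1048576kB"
echo 1024 > "$root/node0/hugepages/hugepages-2048kB/nr_hugepages"
echo 2    > "$root/node0/hugepages/hugepages-1048576kB/nr_hugepages"

for hp in "$root"/node0/hugepages/hugepages-*; do
  echo 0 > "$hp/nr_hugepages"   # release the reservation for this size
done
```

The glob covers both the 2 MiB and 1 GiB size directories, matching the two `echo 0` writes visible in the trace.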
04:56:17 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:17.326 04:56:17 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.326 04:56:17 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.326 04:56:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:17.326 ************************************ 00:05:17.326 START TEST default_setup 00:05:17.326 ************************************ 00:05:17.326 04:56:17 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:05:17.326 04:56:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:17.326 04:56:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:05:17.326 04:56:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:17.326 04:56:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:05:17.326 04:56:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:17.326 04:56:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:05:17.326 04:56:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:17.326 04:56:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:17.326 04:56:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:17.326 04:56:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:17.326 04:56:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:05:17.326 04:56:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:17.326 04:56:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:17.326 
04:56:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:17.326 04:56:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:17.326 04:56:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:17.326 04:56:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:17.326 04:56:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:17.326 04:56:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:05:17.326 04:56:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:05:17.326 04:56:17 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:05:17.326 04:56:17 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:18.267 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:18.267 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:18.267 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != 
*\[\n\e\v\e\r\]* ]] 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6975524 kB' 'MemAvailable: 9485436 kB' 'Buffers: 2436 kB' 'Cached: 2714792 kB' 'SwapCached: 0 kB' 'Active: 453452 kB' 'Inactive: 2385860 kB' 'Active(anon): 132552 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2385860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123764 kB' 'Mapped: 48864 kB' 'Shmem: 10468 kB' 'KReclaimable: 80180 kB' 'Slab: 157772 kB' 'SReclaimable: 80180 kB' 'SUnreclaim: 77592 kB' 'KernelStack: 6608 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.267 04:56:18 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.267 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.268 04:56:18 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# read -r var val _ 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.268 04:56:18 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.268 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.269 04:56:18 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 
00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6975524 kB' 'MemAvailable: 9485308 kB' 'Buffers: 2436 kB' 'Cached: 2714792 kB' 'SwapCached: 0 kB' 'Active: 453196 kB' 'Inactive: 2385860 kB' 'Active(anon): 132296 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2385860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123472 kB' 'Mapped: 48744 kB' 'Shmem: 10468 kB' 'KReclaimable: 79928 kB' 'Slab: 157496 kB' 'SReclaimable: 79928 kB' 'SUnreclaim: 77568 kB' 'KernelStack: 6592 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 
'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.269 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.270 
04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.270 04:56:18 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# IFS=': ' 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.270 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # 
local var val 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6975524 kB' 'MemAvailable: 9485308 kB' 'Buffers: 2436 kB' 'Cached: 2714792 kB' 'SwapCached: 0 kB' 'Active: 453252 kB' 'Inactive: 2385860 kB' 'Active(anon): 132352 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2385860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123472 kB' 'Mapped: 48744 kB' 'Shmem: 10468 kB' 'KReclaimable: 79928 kB' 'Slab: 157488 kB' 'SReclaimable: 79928 kB' 'SUnreclaim: 77560 kB' 'KernelStack: 6592 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.271 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.272 04:56:18 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.272 04:56:18 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.272 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.273 
04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.273 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:18.274 nr_hugepages=1024 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:18.274 resv_hugepages=0 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:18.274 surplus_hugepages=0 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:18.274 anon_hugepages=0 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == 
nr_hugepages + surp + resv )) 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6975524 kB' 'MemAvailable: 9485308 kB' 'Buffers: 2436 kB' 'Cached: 2714792 kB' 'SwapCached: 0 kB' 'Active: 453140 kB' 'Inactive: 2385860 kB' 'Active(anon): 132240 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2385860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123352 kB' 'Mapped: 48744 kB' 'Shmem: 10468 kB' 'KReclaimable: 79928 kB' 'Slab: 157488 kB' 'SReclaimable: 79928 kB' 'SUnreclaim: 77560 kB' 'KernelStack: 6576 kB' 'PageTables: 4232 
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.274 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.275 04:56:18 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.275 04:56:18 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.275 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
[[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
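The long runs of `continue` records above all come from the same scan in setup/common.sh's `get_meminfo`: each meminfo line is split on `': '` into a key and a value, every key that is not the requested one is skipped with `continue`, and the matching key's value is echoed. A minimal self-contained sketch of that pattern, using a hypothetical three-line sample in place of the live `/proc/meminfo`:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo scan traced above: split each line on ': ',
# `continue` past non-matching keys, and capture the value of the target key.
# meminfo_sample is an illustrative stand-in for /proc/meminfo.
meminfo_sample='MemTotal: 12241976 kB
HugePages_Total: 1024
HugePages_Free: 1024'

get=HugePages_Total
result=
while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue   # the repeated "continue" records in the log
    result=$val
    break
done <<< "$meminfo_sample"
echo "$result"   # prints 1024 for this sample
```

Setting `IFS=': '` makes `read` split on both the colon and the surrounding spaces, so `val` receives the bare number and the trailing `kB` unit (when present) lands in the throwaway `_` field.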
00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.276 
04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.276 
04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.276 04:56:18 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.276 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.277 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.277 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.277 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.277 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.277 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.277 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.277 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:05:18.277 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:18.277 04:56:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:18.277 04:56:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:05:18.277 04:56:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:05:18.277 04:56:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:18.277 04:56:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:18.277 04:56:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:18.277 04:56:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:18.277 04:56:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:18.277 04:56:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv 
)) 00:05:18.277 04:56:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:18.277 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:18.277 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:05:18.277 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:18.277 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:18.277 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.277 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:18.277 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:18.277 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.277 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.277 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.277 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6975524 kB' 'MemUsed: 5266452 kB' 'SwapCached: 0 kB' 'Active: 453260 kB' 'Inactive: 2385860 kB' 'Active(anon): 132360 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2385860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 2717228 kB' 'Mapped: 48744 kB' 'AnonPages: 123472 kB' 'Shmem: 10468 kB' 'KernelStack: 6576 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79928 kB' 'Slab: 157488 kB' 'SReclaimable: 79928 kB' 'SUnreclaim: 77560 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:18.277 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.277 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.277 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.277 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.277 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.277 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.277 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.277 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.277 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.536 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.536 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.536 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.536 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.536 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.536 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.536 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.536 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.536 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.536 04:56:18 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:05:18.536 [... setup/common.sh@31-@32 xtrace elided: get_meminfo reads each remaining /proc/meminfo field (Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped) with IFS=': ' and skips it via "continue"; none match HugePages_Surp ...]
00:05:18.537 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:18.537 04:56:18
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:05:18.537 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:18.537 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:18.537 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:18.537 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:05:18.537 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:18.537 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:18.537 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:18.537 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:05:18.537 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:18.537 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:18.537 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:18.537 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:18.537 04:56:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:18.537 04:56:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:18.537 04:56:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:18.537 04:56:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:18.537 node0=1024 expecting 1024
00:05:18.537 ************************************
00:05:18.537 END TEST default_setup
00:05:18.537 ************************************
00:05:18.537 04:56:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:18.537 04:56:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:18.537 04:56:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:18.537
00:05:18.537 real 0m1.000s
00:05:18.537 user 0m0.456s
00:05:18.537 sys 0m0.462s
00:05:18.537 04:56:18 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:18.537 04:56:18 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:05:18.537 04:56:18 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:05:18.537 04:56:18 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:05:18.537 04:56:18 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:18.537 04:56:18 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:18.537 04:56:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:18.537 ************************************
00:05:18.537 START TEST per_node_1G_alloc
00:05:18.537 ************************************
00:05:18.537 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:05:18.537 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:05:18.537 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:05:18.537 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:05:18.537 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:18.537 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:05:18.537 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:18.537 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:05:18.537 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:18.537 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:18.537 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:18.537 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:18.537 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:18.537 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:18.537 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:18.537 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:18.538 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:18.538 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:18.538 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:18.538 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:18.538 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:05:18.538 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:05:18.538 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0
00:05:18.538 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:05:18.538 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:18.538 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:18.799 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:18.799 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:18.799 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:18.799 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:05:18.799 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:05:18.799 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:05:18.799 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:18.799 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:18.799 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:18.799 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:18.799 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:18.799 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:18.799 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:18.799 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:18.799 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:18.799 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:18.799 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:18.799 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:18.799 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:18.799 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:18.799 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:18.799 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:18.799 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:18.799 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8022708 kB' 'MemAvailable: 10532492 kB' 'Buffers: 2436 kB' 'Cached: 2714792 kB' 'SwapCached: 0 kB' 'Active: 453684 kB' 'Inactive: 2385860 kB' 'Active(anon): 132784 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2385860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123904 kB' 'Mapped: 48860 kB' 'Shmem: 10468 kB' 'KReclaimable: 79924 kB' 'Slab: 157440 kB' 'SReclaimable: 79924 kB' 'SUnreclaim: 77516 kB' 'KernelStack: 6616 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB'
00:05:18.799 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:18.799
04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:18.800 [... setup/common.sh@31-@32 xtrace elided: get_meminfo reads each /proc/meminfo field from MemTotal through HardwareCorrupted with IFS=': ' and skips it via "continue"; none match AnonHugePages ...]
00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:18.801 04:56:18
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8022708 kB' 'MemAvailable: 10532492 kB' 'Buffers: 2436 kB' 'Cached: 2714792 kB' 'SwapCached: 0 kB' 'Active: 452972 kB' 'Inactive: 2385860 kB' 'Active(anon): 132072 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2385860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123240 kB' 'Mapped: 48744 kB' 'Shmem: 10468 kB' 'KReclaimable: 79924 kB' 'Slab: 157460 kB' 'SReclaimable: 79924 kB' 'SUnreclaim: 77536 kB' 'KernelStack: 6592 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.801 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.802 
04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.802 04:56:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.802 04:56:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.802 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.803 04:56:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8022456 kB' 'MemAvailable: 10532240 kB' 'Buffers: 2436 kB' 'Cached: 2714792 kB' 
'SwapCached: 0 kB' 'Active: 453284 kB' 'Inactive: 2385860 kB' 'Active(anon): 132384 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2385860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123508 kB' 'Mapped: 48744 kB' 'Shmem: 10468 kB' 'KReclaimable: 79924 kB' 'Slab: 157460 kB' 'SReclaimable: 79924 kB' 'SUnreclaim: 77536 kB' 'KernelStack: 6592 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.803 04:56:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.803 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.803 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.803 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.803 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.803 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.803 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.803 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.803 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.803 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.803 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.803 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.803 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.803 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.803 04:56:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.803 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.803 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.803 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.803 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.803 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.803 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.803 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.803 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.803 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.803 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.803 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.803 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.803 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.803 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.803 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.803 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.803 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.803 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:18.803 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.803 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.803 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.803 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.803 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.803 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.803 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.804 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.804 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.804 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.804 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.804 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.804 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.804 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.804 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.804 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.804 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.804 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.804 04:56:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.804 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.804 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.804 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.804 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.804 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.804 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.804 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.804 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.804 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.804 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.804 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.804 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.804 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.804 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.804 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:18.804 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:18.804 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:18.804 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.804 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- 
# [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.066 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 04:56:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.067 04:56:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 
04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:19.067 nr_hugepages=512 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:19.067 resv_hugepages=0 00:05:19.067 surplus_hugepages=0 00:05:19.067 anon_hugepages=0 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:19.067 
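The xtrace above shows `get_meminfo` scanning every /proc/meminfo key with `IFS=': '` / `read -r var val _`, hitting `continue` until the requested key (`HugePages_Rsvd`) matches, then echoing its value. The sketch below is a reconstruction of that technique from the trace alone, not SPDK's actual `setup/common.sh` source; the per-node path handling and argument names are assumptions.

```shell
#!/usr/bin/env bash
# Hedged reconstruction of the get_meminfo helper traced in the log:
# scan a meminfo file line by line, skip non-matching keys, and print
# the numeric value of the requested key.
get_meminfo() {
    local get=$1 node=${2:-}       # key to look up; optional NUMA node (assumed interface)
    local var val _
    local mem_f=/proc/meminfo
    # When a node is given and a node-local meminfo exists, read that instead.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    while IFS=': ' read -r var val _; do
        # Mirrors the traced pattern: [[ $var == <key> ]] || continue
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

# Usage, matching the call visible in the log:
get_meminfo HugePages_Total
```

With `IFS=': '`, a line such as `HugePages_Rsvd:        0` splits into `var=HugePages_Rsvd` and `val=0`, which is why the trace ends with `# echo 0` and `# return 0`, producing the `resv=0` seen above.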
04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8025524 kB' 'MemAvailable: 10535308 kB' 'Buffers: 2436 kB' 'Cached: 2714792 kB' 'SwapCached: 0 kB' 'Active: 453336 kB' 'Inactive: 2385860 kB' 'Active(anon): 132436 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2385860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123332 kB' 'Mapped: 48744 kB' 'Shmem: 10468 kB' 'KReclaimable: 79924 kB' 'Slab: 157460 kB' 'SReclaimable: 79924 kB' 'SUnreclaim: 77536 kB' 'KernelStack: 6592 kB' 
'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.067 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.068 
04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 
04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 04:56:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 04:56:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.069 04:56:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == 
nr_hugepages + surp + resv )) 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.069 04:56:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8025904 kB' 'MemUsed: 4216072 kB' 'SwapCached: 0 kB' 'Active: 453272 kB' 'Inactive: 2385860 kB' 'Active(anon): 132372 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2385860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 2717228 kB' 'Mapped: 48744 kB' 'AnonPages: 123488 kB' 'Shmem: 10468 kB' 'KernelStack: 6560 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79924 kB' 'Slab: 157460 kB' 'SReclaimable: 79924 kB' 'SUnreclaim: 77536 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 04:56:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.069 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 04:56:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.070 04:56:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.070 04:56:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:19.070 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:19.071 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:19.071 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:19.071 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:19.071 node0=512 expecting 512 00:05:19.071 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:19.071 ************************************ 00:05:19.071 END TEST per_node_1G_alloc 00:05:19.071 ************************************ 00:05:19.071 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 
00:05:19.071 00:05:19.071 real 0m0.551s 00:05:19.071 user 0m0.279s 00:05:19.071 sys 0m0.281s 00:05:19.071 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.071 04:56:19 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:19.071 04:56:19 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:19.071 04:56:19 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:19.071 04:56:19 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:19.071 04:56:19 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.071 04:56:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:19.071 ************************************ 00:05:19.071 START TEST even_2G_alloc 00:05:19.071 ************************************ 00:05:19.071 04:56:19 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:05:19.071 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:19.071 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:19.071 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:19.071 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:19.071 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:19.071 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:19.071 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:19.071 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:19.071 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 
00:05:19.071 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:19.071 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:19.071 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:19.071 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:19.071 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:19.071 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:19.071 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:19.071 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:19.071 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:19.071 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:19.071 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:19.071 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:19.071 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:05:19.071 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:19.071 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:19.331 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:19.331 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:19.331 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@89 -- # local node 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
12241976 kB' 'MemFree: 6978528 kB' 'MemAvailable: 9488312 kB' 'Buffers: 2436 kB' 'Cached: 2714792 kB' 'SwapCached: 0 kB' 'Active: 453732 kB' 'Inactive: 2385860 kB' 'Active(anon): 132832 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2385860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 123976 kB' 'Mapped: 48856 kB' 'Shmem: 10468 kB' 'KReclaimable: 79924 kB' 'Slab: 157480 kB' 'SReclaimable: 79924 kB' 'SUnreclaim: 77556 kB' 'KernelStack: 6696 kB' 'PageTables: 4460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.331 04:56:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.331 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.332 04:56:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.332 04:56:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.332 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.595 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.595 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.595 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.595 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.595 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.595 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:19.595 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.595 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.595 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.595 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.595 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.595 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.595 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.595 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.595 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.595 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.595 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.595 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.595 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.595 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.595 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.595 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.595 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.595 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.595 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:19.595 04:56:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:19.595 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:19.595 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:19.595 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:19.595 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:19.595 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:19.595 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:19.595 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.595 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.595 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.595 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.595 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.595 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.595 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6978528 kB' 'MemAvailable: 9488312 kB' 'Buffers: 2436 kB' 'Cached: 2714792 kB' 'SwapCached: 0 kB' 'Active: 453500 kB' 'Inactive: 2385860 kB' 'Active(anon): 132600 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2385860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 123484 kB' 'Mapped: 48796 kB' 'Shmem: 10468 kB' 'KReclaimable: 79924 kB' 'Slab: 157468 kB' 'SReclaimable: 79924 kB' 'SUnreclaim: 77544 kB' 'KernelStack: 6648 kB' 
'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:19.595 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.596 
04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.596 04:56:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.596 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.597 
04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.597 04:56:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.597 04:56:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.597 04:56:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.597 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6978280 kB' 'MemAvailable: 9488064 kB' 'Buffers: 2436 kB' 'Cached: 2714792 kB' 'SwapCached: 0 kB' 'Active: 453424 kB' 'Inactive: 2385860 kB' 'Active(anon): 132524 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2385860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 
'Writeback: 0 kB' 'AnonPages: 123632 kB' 'Mapped: 48740 kB' 'Shmem: 10468 kB' 'KReclaimable: 79924 kB' 'Slab: 157464 kB' 'SReclaimable: 79924 kB' 'SUnreclaim: 77540 kB' 'KernelStack: 6576 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.598 
04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.598 04:56:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.598 04:56:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.598 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # continue 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 
00:05:19.599 nr_hugepages=1024 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:19.599 resv_hugepages=0 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:19.599 surplus_hugepages=0 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:19.599 anon_hugepages=0 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:19.599 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:19.600 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:19.600 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:19.600 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:19.600 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.600 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.600 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.600 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.600 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.600 04:56:19 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:19.600 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6978652 kB' 'MemAvailable: 9488436 kB' 'Buffers: 2436 kB' 'Cached: 2714792 kB' 'SwapCached: 0 kB' 'Active: 453464 kB' 'Inactive: 2385860 kB' 'Active(anon): 132564 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2385860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 123728 kB' 'Mapped: 49000 kB' 'Shmem: 10468 kB' 'KReclaimable: 79924 kB' 'Slab: 157464 kB' 'SReclaimable: 79924 kB' 'SUnreclaim: 77540 kB' 'KernelStack: 6624 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:19.600 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.600 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.600 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.600 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.600 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.600 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.600 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue [... identical IFS=': ' / read / compare / continue xtrace repeated for each non-matching meminfo key from MemAvailable through Unaccepted ...] 00:05:19.602 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.602 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:19.602 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:19.602 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:19.602 04:56:19 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:19.602 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:19.602 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:19.602 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:19.602 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:19.602 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:19.602 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:19.602 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:19.602 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:19.602 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:19.602 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:19.602 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:19.602 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:19.602 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.602 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:19.602 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:19.602 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.602 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.602 04:56:19 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:19.602 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6978700 kB' 'MemUsed: 5263276 kB' 'SwapCached: 0 kB' 'Active: 453012 kB' 'Inactive: 2385860 kB' 'Active(anon): 132112 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2385860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'FilePages: 2717228 kB' 'Mapped: 48740 kB' 'AnonPages: 123288 kB' 'Shmem: 10468 kB' 'KernelStack: 6592 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79924 kB' 'Slab: 157448 kB' 'SReclaimable: 79924 kB' 'SUnreclaim: 77524 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:19.602 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.602 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.602 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.602 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.602 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.602 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.602 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.602 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.602 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.602 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.602 
04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue [... identical IFS=': ' / read / compare / continue xtrace repeated for each non-matching node0 meminfo key through Mlocked ...] 00:05:19.602 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
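
Note the source switch in the trace above: once `get_meminfo` is called with `node=0`, setup/common.sh@23-24 checks for `/sys/devices/system/node/node0/meminfo` and reads that instead of `/proc/meminfo` (stripping the per-node `Node 0 ` prefix via `mem=("${mem[@]#Node +([0-9]) }")`), while `get_nodes` counts NUMA nodes by globbing the node directories. A hedged sketch of both steps, with paths mirroring the trace but not copied verbatim from SPDK's helpers:

```shell
#!/usr/bin/env bash
# Sketch of the per-node meminfo selection seen at setup/common.sh@18-24:
# with a node argument, prefer that node's sysfs meminfo; otherwise fall
# back to the global /proc/meminfo.
pick_meminfo_source() {
    local node=$1 mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    echo "$mem_f"
}

# get_nodes-style enumeration: one entry per NUMA node directory
# (the traced script uses the extglob form node+([0-9])).
count_nodes() {
    local -a nodes=(/sys/devices/system/node/node[0-9]*)
    [[ -d ${nodes[0]} ]] && echo "${#nodes[@]}" || echo 0
}
```

On this single-node VM the trace records `no_nodes=1` and then walks `nodes_test[0]`, which is why only `node0/meminfo` is read.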
00:05:19.602 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.602 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue [... identical IFS=': ' / read / compare / continue xtrace repeated for the remaining non-matching node0 meminfo keys ...] 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.603 04:56:19 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:19.603 node0=1024 expecting 1024 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:19.603 00:05:19.603 real 0m0.530s 00:05:19.603 user 0m0.276s 00:05:19.603 sys 0m0.270s 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.603 04:56:19 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:19.603 ************************************ 00:05:19.603 END TEST even_2G_alloc 00:05:19.603 ************************************ 00:05:19.603 04:56:19 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:19.603 04:56:19 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:19.603 04:56:19 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:19.603 04:56:19 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.603 04:56:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:19.603 
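The trace above is the `get_meminfo` helper from `setup/common.sh` scanning `/proc/meminfo` one line at a time: it splits each line on `IFS=': '`, skips (`continue`) every key that is not the one requested, and echoes the value when the key matches (here `HugePages_Surp`, yielding `0`). A minimal sketch of that pattern, with illustrative names (the real helper in `setup/common.sh` differs in details such as per-node file selection):

```shell
#!/usr/bin/env bash
# Hedged sketch of the meminfo-scanning loop seen in the trace.
# Reads meminfo-formatted lines on stdin and prints the value for one key.
get_meminfo_value() {
	local get=$1 var val _
	# IFS=': ' splits "HugePages_Surp: 0" into var=HugePages_Surp val=0
	while IFS=': ' read -r var val _; do
		if [[ $var == "$get" ]]; then
			echo "$val"
			return 0
		fi
		# non-matching keys are skipped, as the repeated "continue"
		# branches in the trace show
	done
	return 1
}

# Usage example with sample input; prints: 0
printf 'MemTotal: 12241976 kB\nHugePages_Surp: 0\n' |
	get_meminfo_value HugePages_Surp
```

On a Linux host the same function works with `get_meminfo_value HugePages_Surp < /proc/meminfo`; scanning the per-node file `/sys/devices/system/node/node<N>/meminfo` instead gives the per-NUMA-node counts the test compares against.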
************************************ 00:05:19.603 START TEST odd_alloc 00:05:19.603 ************************************ 00:05:19.603 04:56:19 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:05:19.603 04:56:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:19.603 04:56:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:19.603 04:56:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:19.603 04:56:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:19.603 04:56:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:19.603 04:56:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:19.603 04:56:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:19.603 04:56:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:19.603 04:56:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:19.603 04:56:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:19.603 04:56:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:19.603 04:56:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:19.603 04:56:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:19.603 04:56:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:19.603 04:56:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:19.603 04:56:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:19.603 04:56:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:19.603 04:56:19 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@84 -- # : 0 00:05:19.603 04:56:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:19.603 04:56:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:19.603 04:56:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:19.603 04:56:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:19.603 04:56:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:19.603 04:56:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:19.863 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:20.129 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:20.129 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@18 -- # local node= 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6978188 kB' 'MemAvailable: 9487972 kB' 'Buffers: 2436 kB' 'Cached: 2714792 kB' 'SwapCached: 0 kB' 'Active: 453780 kB' 'Inactive: 2385860 kB' 'Active(anon): 132880 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2385860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 124000 kB' 'Mapped: 48868 kB' 'Shmem: 10468 kB' 'KReclaimable: 79924 kB' 'Slab: 157456 kB' 'SReclaimable: 79924 kB' 'SUnreclaim: 77532 kB' 'KernelStack: 6596 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 353608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 
'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.129 04:56:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.129 04:56:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.129 04:56:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:20.129 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.130 
04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.130 04:56:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.130 04:56:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6978496 kB' 'MemAvailable: 9488280 kB' 'Buffers: 2436 kB' 'Cached: 2714792 kB' 'SwapCached: 0 kB' 'Active: 453320 kB' 'Inactive: 2385860 kB' 'Active(anon): 132420 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 
2385860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123532 kB' 'Mapped: 48740 kB' 'Shmem: 10468 kB' 'KReclaimable: 79924 kB' 'Slab: 157452 kB' 'SReclaimable: 79924 kB' 'SUnreclaim: 77528 kB' 'KernelStack: 6592 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 353608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.130 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.131 04:56:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.131 04:56:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.131 04:56:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # continue 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.131 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # 
local get=HugePages_Rsvd 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6978748 kB' 'MemAvailable: 9488540 kB' 'Buffers: 2436 kB' 'Cached: 2714792 kB' 'SwapCached: 0 kB' 'Active: 453284 kB' 'Inactive: 2385860 kB' 'Active(anon): 132384 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2385860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123524 kB' 'Mapped: 48740 kB' 'Shmem: 10468 kB' 'KReclaimable: 79940 kB' 'Slab: 157464 kB' 'SReclaimable: 79940 kB' 'SUnreclaim: 77524 kB' 'KernelStack: 6592 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 353608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 
kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.132 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.133 04:56:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.133 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.134 04:56:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.134 04:56:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.134 04:56:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.134 
04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.134 04:56:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:20.134 nr_hugepages=1025 00:05:20.134 resv_hugepages=0 00:05:20.134 surplus_hugepages=0 00:05:20.134 anon_hugepages=0 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.134 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6979000 kB' 'MemAvailable: 9488792 kB' 'Buffers: 2436 kB' 'Cached: 2714792 kB' 'SwapCached: 0 kB' 'Active: 453272 kB' 'Inactive: 2385860 kB' 'Active(anon): 132372 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2385860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123524 kB' 'Mapped: 48740 kB' 'Shmem: 10468 kB' 'KReclaimable: 79940 kB' 'Slab: 157460 kB' 'SReclaimable: 79940 kB' 'SUnreclaim: 77520 kB' 'KernelStack: 6592 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 353608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:20.135 04:56:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.135 04:56:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.135 04:56:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.135 04:56:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.135 04:56:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.135 04:56:20 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.135 
04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.135 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.136 
04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
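The long run of `continue` lines above is the expansion of a simple field-scanning loop: the helper reads meminfo one `key: value` pair at a time with `IFS=': ' read -r var val _`, skips every key that is not the one requested (`HugePages_Rsvd` earlier, `HugePages_Total` here), and echoes the value on a match. A minimal sketch of that pattern follows; the function name and the second file argument are illustrative, not the exact SPDK `common.sh` helper:

```shell
#!/usr/bin/env bash
# Sketch of the meminfo scan seen in the trace: walk "key: value" lines,
# skip non-matching keys (the long run of "continue" entries above),
# and print the value for the requested key.
get_meminfo_field() {
    local want=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$want" ]] || continue   # not the key we want; next line
        echo "$val"                         # matched: emit the value
        return 0
    done < "$file"
    return 1                                # key not present in the file
}
```

On the system in this run, `get_meminfo_field HugePages_Total` would print `1025`, the count the test then checks against `nr_hugepages + surp + resv`.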
00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:20.136 04:56:20 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6979252 kB' 'MemUsed: 5262724 kB' 'SwapCached: 0 kB' 'Active: 453372 kB' 'Inactive: 2385860 kB' 'Active(anon): 132472 kB' 'Inactive(anon): 
0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2385860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 2717228 kB' 'Mapped: 48736 kB' 'AnonPages: 123644 kB' 'Shmem: 10468 kB' 'KernelStack: 6560 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79940 kB' 'Slab: 157448 kB' 'SReclaimable: 79940 kB' 'SUnreclaim: 77508 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.136 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.137 
04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.137 
04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:20.137 04:56:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:20.138 04:56:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:20.138 04:56:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:20.138 04:56:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:20.138 04:56:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:20.138 04:56:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:20.138 node0=1025 expecting 1025 00:05:20.138 ************************************ 00:05:20.138 END TEST odd_alloc 00:05:20.138 
************************************ 00:05:20.138 04:56:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:20.138 00:05:20.138 real 0m0.546s 00:05:20.138 user 0m0.251s 00:05:20.138 sys 0m0.306s 00:05:20.138 04:56:20 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.138 04:56:20 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:20.138 04:56:20 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:20.138 04:56:20 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:20.138 04:56:20 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.138 04:56:20 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.138 04:56:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:20.138 ************************************ 00:05:20.138 START TEST custom_alloc 00:05:20.138 ************************************ 00:05:20.138 04:56:20 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:05:20.138 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:20.138 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:20.138 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:20.138 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:20.138 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:20.138 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:20.138 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:20.138 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:20.138 04:56:20 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:20.138 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:20.138 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:20.138 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:20.138 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:20.138 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:20.138 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:20.138 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:20.138 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:20.138 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:20.138 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:20.138 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:20.138 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:20.138 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:20.138 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:20.138 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:20.138 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:20.138 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:20.138 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:20.138 04:56:20 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:20.138 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:20.138 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:20.138 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:20.138 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:20.138 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:20.138 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:20.138 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:20.138 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:20.138 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:20.138 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:20.138 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:20.139 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:20.139 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:20.139 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:20.139 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:20.139 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.139 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:20.712 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI 
dev 00:05:20.712 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:20.712 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 
00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8028984 kB' 'MemAvailable: 10538780 kB' 'Buffers: 2436 kB' 'Cached: 2714796 kB' 'SwapCached: 0 kB' 'Active: 453728 kB' 'Inactive: 2385864 kB' 'Active(anon): 132828 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2385864 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123972 kB' 'Mapped: 48808 kB' 'Shmem: 10468 kB' 'KReclaimable: 79940 kB' 'Slab: 157452 kB' 'SReclaimable: 79940 kB' 'SUnreclaim: 77512 kB' 'KernelStack: 6612 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.712 04:56:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.712 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 04:56:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # continue 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.713 04:56:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.713 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.714 
04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8029600 kB' 'MemAvailable: 10539396 kB' 'Buffers: 2436 kB' 'Cached: 2714796 kB' 'SwapCached: 0 kB' 'Active: 453208 kB' 'Inactive: 2385864 kB' 'Active(anon): 132308 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2385864 kB' 'Unevictable: 
1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123444 kB' 'Mapped: 48740 kB' 'Shmem: 10468 kB' 'KReclaimable: 79940 kB' 'Slab: 157440 kB' 'SReclaimable: 79940 kB' 'SUnreclaim: 77500 kB' 'KernelStack: 6576 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.714 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 
04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.715 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.716 04:56:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8029600 kB' 'MemAvailable: 10539396 kB' 'Buffers: 2436 kB' 'Cached: 2714796 kB' 'SwapCached: 0 kB' 'Active: 453304 kB' 'Inactive: 2385864 kB' 'Active(anon): 132404 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2385864 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 
'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123540 kB' 'Mapped: 48740 kB' 'Shmem: 10468 kB' 'KReclaimable: 79940 kB' 'Slab: 157440 kB' 'SReclaimable: 79940 kB' 'SUnreclaim: 77500 kB' 'KernelStack: 6592 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.716 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.717 04:56:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.717 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.718 04:56:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.718 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:20.719 nr_hugepages=512 00:05:20.719 resv_hugepages=0 
00:05:20.719 surplus_hugepages=0 00:05:20.719 anon_hugepages=0 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8029940 kB' 
'MemAvailable: 10539736 kB' 'Buffers: 2436 kB' 'Cached: 2714796 kB' 'SwapCached: 0 kB' 'Active: 453048 kB' 'Inactive: 2385864 kB' 'Active(anon): 132148 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2385864 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123284 kB' 'Mapped: 48740 kB' 'Shmem: 10468 kB' 'KReclaimable: 79940 kB' 'Slab: 157440 kB' 'SReclaimable: 79940 kB' 'SUnreclaim: 77500 kB' 'KernelStack: 6592 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.719 04:56:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.719 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.719 04:56:20
[identical xtrace iterations for the remaining /proc/meminfo keys (Active(anon) through Unaccepted) omitted; the loop continues until HugePages_Total matches]
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.721 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:05:20.721 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:20.721 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:20.721 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:20.721 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:20.721 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:20.721 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:20.721 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:20.721 04:56:20
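The xtrace above is setup/common.sh's get_meminfo walking a meminfo file line by line with `IFS=': ' read -r var val _` and echoing the value once the requested key matches. A minimal standalone sketch of that parsing pattern (function name and the second file argument are illustrative, not from the SPDK source):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo parsing pattern seen in the trace: split each
# "Key: value [unit]" line on ': ', skip non-matching keys, print the value
# for the requested key. The optional second argument is an assumption here,
# added so the sketch can be pointed at a test file instead of /proc/meminfo.
get_meminfo_sketch() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # key mismatch: next line
        echo "$val"                        # e.g. "512" for HugePages_Total
        return 0
    done < "$mem_f"
    return 1                               # key not present
}
```

With a meminfo line such as `HugePages_Total:     512`, the `IFS=': '` split yields `var=HugePages_Total` and `val=512`, which is exactly the `echo 512` visible in the trace.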
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:20.721 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:20.721 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:20.721 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:20.721 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:20.721 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:20.721 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:20.721 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:20.721 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.721 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:20.721 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:20.721 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.721 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.721 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.721 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8029940 kB' 'MemUsed: 4212036 kB' 'SwapCached: 0 kB' 'Active: 453288 kB' 'Inactive: 2385864 kB' 'Active(anon): 132388 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2385864 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 2717232 kB' 'Mapped: 48740 kB' 'AnonPages: 123516 kB' 'Shmem: 10468 kB' 'KernelStack: 6592 kB' 'PageTables: 
4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79940 kB' 'Slab: 157440 kB' 'SReclaimable: 79940 kB' 'SUnreclaim: 77500 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:20.721 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.721 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.721 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:20.721 04:56:20
[identical xtrace iterations for the remaining node0 meminfo keys (MemFree through HugePages_Free) omitted; the loop continues until HugePages_Surp matches]
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.723 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:20.723 04:56:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:20.723 node0=512 expecting 512 00:05:20.723 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:20.723 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:20.723 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:20.723 04:56:20 setup.sh.hugepages.custom_alloc --
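For the per-node lookup traced above, the script switches mem_f to /sys/devices/system/node/node0/meminfo, whose lines carry a `Node 0 ` prefix, and strips that prefix with the extglob pattern expansion `mem=("${mem[@]#Node +([0-9]) }")` before scanning for the key. A self-contained sketch of that step (function and file names are illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the per-NUMA-node variant seen in the trace: lines from
# /sys/devices/system/node/node<N>/meminfo look like
#   "Node 0 HugePages_Surp:     0"
# so the "Node <N> " prefix is stripped with an extglob pattern first,
# then the same key/value scan as for /proc/meminfo is applied.
shopt -s extglob
node_meminfo_sketch() {
    local get=$1 file=$2 var val _ line
    local -a mem
    mapfile -t mem < "$file"               # slurp all lines
    mem=("${mem[@]#Node +([0-9]) }")       # drop the "Node 0 " prefix
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                    # e.g. "0" for HugePages_Surp
            return 0
        fi
    done
    return 1
}
```

The extglob option is required for the `+([0-9])` pattern; without it the prefix would not be removed and no key would ever match.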
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:20.723 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:20.723 04:56:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:20.723 00:05:20.723 real 0m0.529s 00:05:20.723 user 0m0.252s 00:05:20.723 sys 0m0.288s 00:05:20.723 04:56:20 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.723 ************************************ 00:05:20.723 END TEST custom_alloc 00:05:20.723 ************************************ 00:05:20.723 04:56:20 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:20.723 04:56:20 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:20.723 04:56:20 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:20.723 04:56:20 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.723 04:56:20 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.723 04:56:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:20.723 ************************************ 00:05:20.723 START TEST no_shrink_alloc 00:05:20.723 ************************************ 00:05:20.723 04:56:20 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:05:20.723 04:56:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:20.723 04:56:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:20.723 04:56:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:20.723 04:56:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:20.723 04:56:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:20.723 04:56:20 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:20.723 04:56:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:20.723 04:56:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:20.723 04:56:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:20.723 04:56:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:20.723 04:56:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:20.723 04:56:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:20.723 04:56:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:20.723 04:56:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:20.723 04:56:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:20.723 04:56:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:20.723 04:56:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:20.723 04:56:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:20.723 04:56:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:20.723 04:56:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:05:20.723 04:56:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.723 04:56:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:21.297 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:21.297 0000:00:11.0 (1b36 0010): Already 
using the uio_pci_generic driver 00:05:21.297 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6985420 kB' 'MemAvailable: 9495216 kB' 'Buffers: 2436 kB' 'Cached: 2714796 kB' 'SwapCached: 0 kB' 'Active: 453484 kB' 'Inactive: 2385864 kB' 'Active(anon): 132584 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2385864 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123956 kB' 'Mapped: 48776 kB' 'Shmem: 10468 kB' 'KReclaimable: 79940 kB' 'Slab: 157416 kB' 'SReclaimable: 79940 kB' 'SUnreclaim: 77476 kB' 'KernelStack: 6600 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.297 
04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.297 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.298 04:56:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.298 04:56:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.298 
04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.298 04:56:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.298 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.299 04:56:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6985532 kB' 'MemAvailable: 9495324 kB' 'Buffers: 2436 kB' 'Cached: 2714792 kB' 'SwapCached: 0 kB' 'Active: 453064 kB' 'Inactive: 2385860 kB' 'Active(anon): 132164 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2385860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123320 kB' 'Mapped: 48776 kB' 'Shmem: 10468 kB' 'KReclaimable: 79940 kB' 'Slab: 157388 kB' 'SReclaimable: 79940 kB' 'SUnreclaim: 77448 kB' 'KernelStack: 6504 kB' 'PageTables: 3952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.299 
04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.299 04:56:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.299 
04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.299 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical "IFS=': ' / read -r var val _ / [[ <field> == HugePages_Surp ]] / continue" checks repeated for every remaining /proc/meminfo field (the full field list appears in the printf dump below); elided ...]
00:05:21.300 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.300 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:21.300 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:21.300 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:21.300 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:21.300 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:21.300 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:21.300 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:21.300 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:21.300 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 --
# mem_f=/proc/meminfo 00:05:21.300 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.300 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.300 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.300 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.300 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.300 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.300 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6985800 kB' 'MemAvailable: 9495596 kB' 'Buffers: 2436 kB' 'Cached: 2714796 kB' 'SwapCached: 0 kB' 'Active: 448660 kB' 'Inactive: 2385864 kB' 'Active(anon): 127760 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2385864 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 119192 kB' 'Mapped: 48160 kB' 'Shmem: 10468 kB' 'KReclaimable: 79940 kB' 'Slab: 157400 kB' 'SReclaimable: 79940 kB' 'SUnreclaim: 77460 kB' 'KernelStack: 6544 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 
9437184 kB' 00:05:21.300 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.301 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical "IFS=': ' / read -r var val _ / [[ <field> == HugePages_Rsvd ]] / continue" checks repeated for every remaining /proc/meminfo field (the full field list appears in the printf dump above); elided ...]
00:05:21.303 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.303 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:21.303 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:21.303 nr_hugepages=1024 resv_hugepages=0 surplus_hugepages=0 anon_hugepages=0 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:21.303 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:21.303 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:21.303 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:21.303 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:21.303 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:21.303 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:21.303 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:21.303 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:21.303 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.303 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.303 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.303 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.303 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.303 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.303 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.303 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6992204 kB' 'MemAvailable: 9501992 kB' 'Buffers: 2436 kB' 'Cached: 2714796 kB' 'SwapCached: 0 kB' 'Active: 447848 kB' 'Inactive: 2385864 kB' 'Active(anon): 126948 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2385864 kB' 
'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 118344 kB' 'Mapped: 47996 kB' 'Shmem: 10468 kB' 'KReclaimable: 79924 kB' 'Slab: 157252 kB' 'SReclaimable: 79924 kB' 'SUnreclaim: 77328 kB' 'KernelStack: 6464 kB' 'PageTables: 3712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:21.303 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.303 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.303 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.303 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.303 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.303 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.303 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.303 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.303 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.303 04:56:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.304 04:56:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.304 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.305 04:56:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.305 04:56:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@27 -- # local node 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6992204 kB' 'MemUsed: 5249772 kB' 'SwapCached: 0 kB' 'Active: 447884 kB' 'Inactive: 2385864 kB' 'Active(anon): 126984 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2385864 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 2717232 kB' 'Mapped: 47996 kB' 'AnonPages: 118340 kB' 'Shmem: 10468 kB' 'KernelStack: 6464 kB' 'PageTables: 3712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79924 kB' 'Slab: 157232 kB' 'SReclaimable: 79924 kB' 'SUnreclaim: 77308 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.305 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.306 04:56:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.306 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.307 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:21.307 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:21.307 node0=1024 expecting 1024 00:05:21.307 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:21.307 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:21.307 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:21.307 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:21.307 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:21.307 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:21.307 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:21.307 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:21.307 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:05:21.307 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:21.307 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:21.566 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:21.833 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:21.833 
0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:21.833 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6989260 kB' 'MemAvailable: 9499048 kB' 'Buffers: 2436 kB' 'Cached: 2714796 kB' 'SwapCached: 0 kB' 'Active: 448544 kB' 'Inactive: 2385864 kB' 'Active(anon): 127644 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2385864 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118784 kB' 'Mapped: 48352 kB' 'Shmem: 10468 kB' 'KReclaimable: 79924 kB' 'Slab: 157172 kB' 'SReclaimable: 79924 kB' 'SUnreclaim: 77248 kB' 'KernelStack: 6504 kB' 'PageTables: 3720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.833 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.834 
04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.834 04:56:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.834 
04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.834 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.835 04:56:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.835 04:56:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6989668 kB' 'MemAvailable: 9499456 kB' 'Buffers: 2436 kB' 'Cached: 2714796 kB' 'SwapCached: 0 kB' 'Active: 447912 kB' 'Inactive: 2385864 kB' 'Active(anon): 127012 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2385864 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118184 kB' 'Mapped: 48000 kB' 'Shmem: 10468 kB' 'KReclaimable: 79924 kB' 'Slab: 157184 kB' 'SReclaimable: 79924 kB' 'SUnreclaim: 77260 kB' 'KernelStack: 6480 kB' 'PageTables: 3764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.835 
04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.835 04:56:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.835 
04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.835 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.836 04:56:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.836 
04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:21.836 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- 
# mem_f=/proc/meminfo 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6989668 kB' 'MemAvailable: 9499456 kB' 'Buffers: 2436 kB' 'Cached: 2714796 kB' 'SwapCached: 0 kB' 'Active: 448164 kB' 'Inactive: 2385864 kB' 'Active(anon): 127264 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2385864 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118436 kB' 'Mapped: 48000 kB' 'Shmem: 10468 kB' 'KReclaimable: 79924 kB' 'Slab: 157184 kB' 'SReclaimable: 79924 kB' 'SUnreclaim: 77260 kB' 'KernelStack: 6480 kB' 'PageTables: 3764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 
9437184 kB' 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.837 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.838 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.838 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.838 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.838 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.838 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.838 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.838 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.838 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.838 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.838 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.838 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.838 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.838 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.838 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.838 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:21.838 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.838 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.838 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.838 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.838 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.838 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.838 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.838 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.838 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.838 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.838 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.838 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.838 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.838 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.838 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.838 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.838 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.838 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.838 04:56:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:21.838-00:05:21.840 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [get_meminfo HugePages_Rsvd] skipped non-matching /proc/meminfo keys Active(file) .. HugePages_Free (repetitive per-key IFS/read/compare/continue trace condensed) 00:05:21.840 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.840 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:21.840 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:21.840 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:21.840 nr_hugepages=1024 00:05:21.840 resv_hugepages=0 00:05:21.840 surplus_hugepages=0 00:05:21.840 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:21.840 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:21.840 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:21.840 anon_hugepages=0
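The trace above is the get_meminfo helper from setup/common.sh reading /proc/meminfo field by field: it splits each line on ': ', skips every key that is not the one requested, and echoes the matching value. A minimal sketch of that loop, reconstructed from the trace (function and variable names are assumptions inferred from the log, not the verbatim SPDK source):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo loop traced above. Each meminfo line is split
# on ': ' into key/value; non-matching keys hit the repeated 'continue'
# seen in the trace, and the matching key's value is echoed.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the per-key skip in the trace
        echo "$val"
        return 0
    done
    return 1
}

# Small stand-in for /proc/meminfo, using values from the log above.
sample='HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Rsvd: 0
HugePages_Surp: 0'

get_meminfo HugePages_Rsvd <<<"$sample"   # prints 0, mirroring 'resv=0' above
```

In the real helper the input file can also be a per-node meminfo (note the `/sys/devices/system/node/node/meminfo` probe in the trace, produced by an empty `node=` value), which is why the function takes the key rather than hard-coding one.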
00:05:21.840 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:21.840 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:21.840 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:21.840 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:21.840 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:21.840 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:21.840 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:21.840 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:21.840 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.840 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.840 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.840 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.840 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.840 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.840 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.840 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6989668 kB' 'MemAvailable: 9499456 kB' 'Buffers: 2436 kB' 'Cached: 2714796 kB' 'SwapCached: 0 kB' 'Active: 448116 kB' 'Inactive: 2385864 kB' 'Active(anon): 127216 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2385864 kB' 
'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118348 kB' 'Mapped: 48000 kB' 'Shmem: 10468 kB' 'KReclaimable: 79924 kB' 'Slab: 157184 kB' 'SReclaimable: 79924 kB' 'SUnreclaim: 77260 kB' 'KernelStack: 6464 kB' 'PageTables: 3712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:05:21.840 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.840 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.840 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.840 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.840 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.840 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.840 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.840 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.840 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.840 04:56:21 
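The meminfo snapshot above reports HugePages_Total: 1024, HugePages_Rsvd: 0 and HugePages_Surp: 0, which is what the `(( 1024 == nr_hugepages + surp + resv ))` check at setup/hugepages.sh@107 in the trace verifies before the test proceeds. A minimal sketch of that consistency check (variable names taken from the trace; exact semantics assumed):

```shell
# Consistency check sketched from the setup/hugepages.sh@107-109 trace:
# the requested hugepage count must match what the kernel reports.
requested=1024      # count the no_shrink_alloc test asked for
nr_hugepages=1024   # HugePages_Total from the meminfo snapshot above
resv=0              # HugePages_Rsvd
surp=0              # HugePages_Surp

if (( requested == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent"
fi
```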
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.840-00:05:21.842 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [get_meminfo HugePages_Total] skipped non-matching /proc/meminfo keys Buffers .. FilePmdMapped (repetitive per-key IFS/read/compare/continue trace condensed)
IFS=': ' 00:05:21.842 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.842 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.842 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.842 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.842 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.842 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.842 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.842 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.842 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.842 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.842 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.842 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.842 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.842 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.842 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:21.842 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:21.842 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:21.842 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:21.842 04:56:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@27 -- # local node 00:05:21.842 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:21.842 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:21.842 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:21.842 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:21.842 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:21.842 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:21.842 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:21.842 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:21.842 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:21.842 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:21.842 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:21.842 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.842 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:21.842 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:21.842 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.842 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.842 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- 
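The `get_nodes` trace above (hugepages.sh@27–33) enumerates NUMA nodes by globbing `/sys/devices/system/node/node+([0-9])`, stores a per-node hugepage count in `nodes_sys`, and counts the nodes into `no_nodes`. A standalone sketch of that pattern follows; the sysfs-root argument and the `hugepages_total` file are testing conveniences of this sketch, not part of the real script (which reads the count via `get_meminfo HugePages_Total` per node):

```shell
#!/usr/bin/env bash
# Sketch of the get_nodes pattern traced above: enumerate node<N> directories
# and record a per-node hugepage count. The sysfs-root parameter and the
# "hugepages_total" file are inventions for this demo; the real script walks
# /sys/devices/system/node and pulls counts from per-node meminfo.
shopt -s extglob nullglob

get_nodes() {
    local sysroot=${1:-/sys/devices/system/node}
    local -A nodes_sys=()
    local node
    for node in "$sysroot"/node+([0-9]); do
        # ${node##*node} keeps only the numeric index after the last "node".
        nodes_sys[${node##*node}]=$(< "$node/hugepages_total")
    done
    echo "no_nodes=${#nodes_sys[@]}"
    for node in "${!nodes_sys[@]}"; do
        echo "node$node=${nodes_sys[$node]}"
    done
}

# Fake single-socket-plus-one tree, mirroring the nodes_sys[...]=1024 line above.
root=$(mktemp -d)
mkdir -p "$root"/node0 "$root"/node1
echo 1024 > "$root"/node0/hugepages_total
echo 512  > "$root"/node1/hugepages_total
get_nodes "$root"
rm -rf "$root"
```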
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6989668 kB' 'MemUsed: 5252308 kB' 'SwapCached: 0 kB' 'Active: 447848 kB' 'Inactive: 2385864 kB' 'Active(anon): 126948 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2385864 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 2717232 kB' 'Mapped: 48000 kB' 'AnonPages: 118336 kB' 'Shmem: 10468 kB' 'KernelStack: 6464 kB' 'PageTables: 3712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79924 kB' 'Slab: 157184 kB' 'SReclaimable: 79924 kB' 'SUnreclaim: 77260 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.843 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:21.844 04:56:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:21.844 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:21.845 node0=1024 expecting 1024 00:05:21.845 04:56:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:21.845 ************************************ 00:05:21.845 END TEST no_shrink_alloc 00:05:21.845 ************************************ 00:05:21.845 00:05:21.845 real 0m1.070s 00:05:21.845 user 0m0.550s 00:05:21.845 sys 0m0.534s 00:05:21.845 04:56:21 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.845 04:56:21 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:21.845 04:56:22 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:21.845 04:56:22 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:21.845 04:56:22 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:21.845 04:56:22 setup.sh.hugepages -- setup/hugepages.sh@39 
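The long `IFS=': ' read -r var val _` / `continue` runs above are `setup/common.sh`'s `get_meminfo` loop: it reads `/proc/meminfo` (or a per-node meminfo file, whose lines carry a `Node N ` prefix), splits each line on `': '`, skips non-matching keys, and echoes the value when the requested key matches. A minimal standalone sketch of that pattern (this reimplementation, including the optional file argument, is mine, not the SPDK helper itself):

```shell
#!/usr/bin/env bash
# Standalone sketch of the get_meminfo pattern traced in this log. The second
# argument (an alternate meminfo file) is an addition for testing; the real
# helper derives the file from a node number instead.
shopt -s extglob

get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo}
    local -a mem
    local line var val _
    mapfile -t mem < "$mem_f"
    # Per-node meminfo lines carry a "Node N " prefix; strip it, as the log does.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

tmp=$(mktemp)
printf '%s\n' 'Node 0 MemTotal: 12241976 kB' 'Node 0 HugePages_Total: 1024' > "$tmp"
get_meminfo HugePages_Total "$tmp"   # prints 1024
rm -f "$tmp"
```

The trace's `[[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]` lines are this same key comparison with every character backslash-escaped by xtrace.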
-- # for node in "${!nodes_sys[@]}" 00:05:21.845 04:56:22 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:21.845 04:56:22 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:21.845 04:56:22 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:21.845 04:56:22 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:21.845 04:56:22 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:21.845 04:56:22 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:21.845 00:05:21.845 real 0m4.682s 00:05:21.845 user 0m2.232s 00:05:21.845 sys 0m2.393s 00:05:21.845 04:56:22 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.845 ************************************ 00:05:21.845 END TEST hugepages 00:05:21.845 ************************************ 00:05:21.845 04:56:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:22.134 04:56:22 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:22.134 04:56:22 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:22.134 04:56:22 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.134 04:56:22 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.134 04:56:22 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:22.134 ************************************ 00:05:22.134 START TEST driver 00:05:22.134 ************************************ 00:05:22.135 04:56:22 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:22.135 * Looking for test storage... 
00:05:22.135 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:22.135 04:56:22 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:22.135 04:56:22 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:22.135 04:56:22 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:22.705 04:56:22 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:22.705 04:56:22 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.705 04:56:22 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.705 04:56:22 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:22.705 ************************************ 00:05:22.705 START TEST guess_driver 00:05:22.705 ************************************ 00:05:22.705 04:56:22 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:05:22.705 04:56:22 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:22.705 04:56:22 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:22.705 04:56:22 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:22.705 04:56:22 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:22.705 04:56:22 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:22.705 04:56:22 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:22.705 04:56:22 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:22.705 04:56:22 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:22.705 04:56:22 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:22.705 04:56:22 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:22.705 04:56:22 
setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:05:22.705 04:56:22 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:05:22.705 04:56:22 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:22.705 04:56:22 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:22.705 04:56:22 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:22.705 04:56:22 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:22.705 04:56:22 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:22.705 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:22.705 04:56:22 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:22.705 Looking for driver=uio_pci_generic 00:05:22.705 04:56:22 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:22.705 04:56:22 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:22.705 04:56:22 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:22.705 04:56:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:22.705 04:56:22 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:22.705 04:56:22 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:22.705 04:56:22 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:23.284 04:56:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:23.285 04:56:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:05:23.285 04:56:23 
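The `pick_driver` trace above prefers vfio when IOMMU groups exist (or unsafe no-IOMMU mode is enabled) and otherwise falls back to `uio_pci_generic`, accepting it only if `modprobe --show-depends` resolves the module to `.ko` files, as the `insmod .../uio_pci_generic.ko.xz` output shows. A simplified sketch of that flow (my reimplementation, not the actual `setup/driver.sh`):

```shell
#!/usr/bin/env bash
# Simplified sketch of the pick_driver flow traced above: vfio wins when IOMMU
# groups are present or unsafe no-IOMMU mode is on; otherwise fall back to
# uio_pci_generic if modprobe can resolve the module.

is_driver() {
    # "modprobe --show-depends mod" prints "insmod /lib/modules/.../mod.ko..."
    # lines when the module and its dependency chain can be resolved.
    [[ $(modprobe --show-depends "$1" 2>/dev/null) == *.ko* ]]
}

pick_driver() {
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    local unsafe_vfio=''
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    # Without nullglob a non-matching glob stays literal, hence the -e check.
    if [[ -e ${iommu_groups[0]} ]] || [[ $unsafe_vfio == Y ]]; then
        echo vfio-pci
    elif is_driver uio_pci_generic; then
        echo uio_pci_generic
    else
        echo 'No valid driver found'
        return 1
    fi
}

pick_driver
```

In this log the IOMMU group count was 0 and `enable_unsafe_noiommu_mode` was not `Y`, so the vfio branch returned 1 and the uio fallback was taken.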
setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:23.285 04:56:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:23.285 04:56:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:23.285 04:56:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:23.543 04:56:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:23.543 04:56:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:23.543 04:56:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:23.543 04:56:23 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:23.543 04:56:23 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:23.543 04:56:23 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:23.543 04:56:23 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:24.111 00:05:24.111 real 0m1.401s 00:05:24.111 user 0m0.492s 00:05:24.111 sys 0m0.878s 00:05:24.111 04:56:24 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.111 ************************************ 00:05:24.111 END TEST guess_driver 00:05:24.111 ************************************ 00:05:24.111 04:56:24 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:24.111 04:56:24 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:05:24.111 ************************************ 00:05:24.111 END TEST driver 00:05:24.111 ************************************ 00:05:24.111 00:05:24.111 real 0m2.082s 00:05:24.111 user 0m0.746s 00:05:24.111 sys 0m1.355s 00:05:24.111 04:56:24 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:05:24.111 04:56:24 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:24.111 04:56:24 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:24.111 04:56:24 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:24.111 04:56:24 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.111 04:56:24 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.111 04:56:24 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:24.111 ************************************ 00:05:24.111 START TEST devices 00:05:24.111 ************************************ 00:05:24.111 04:56:24 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:24.111 * Looking for test storage... 00:05:24.111 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:24.111 04:56:24 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:24.111 04:56:24 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:24.111 04:56:24 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:24.111 04:56:24 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:25.048 04:56:25 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:25.048 04:56:25 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:25.048 04:56:25 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:25.048 04:56:25 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:25.048 04:56:25 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:25.048 04:56:25 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:25.048 04:56:25 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:25.048 04:56:25 
setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:25.048 04:56:25 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:25.048 04:56:25 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:25.048 04:56:25 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:05:25.048 04:56:25 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:05:25.048 04:56:25 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:05:25.048 04:56:25 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:25.048 04:56:25 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:25.048 04:56:25 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 00:05:25.048 04:56:25 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:05:25.048 04:56:25 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:05:25.048 04:56:25 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:25.048 04:56:25 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:25.048 04:56:25 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:25.048 04:56:25 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:25.048 04:56:25 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:25.048 04:56:25 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:25.048 04:56:25 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:25.048 04:56:25 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:25.048 04:56:25 setup.sh.devices -- setup/devices.sh@197 -- # 
blocks_to_pci=() 00:05:25.048 04:56:25 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:25.048 04:56:25 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:25.048 04:56:25 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:25.048 04:56:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:25.048 04:56:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:25.048 04:56:25 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:25.048 04:56:25 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:25.048 04:56:25 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:25.048 04:56:25 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:25.048 04:56:25 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:25.048 No valid GPT data, bailing 00:05:25.048 04:56:25 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:25.048 04:56:25 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:25.048 04:56:25 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:25.048 04:56:25 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:25.048 04:56:25 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:25.048 04:56:25 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:25.048 04:56:25 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:25.048 04:56:25 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:25.048 04:56:25 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:25.048 04:56:25 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:25.048 04:56:25 setup.sh.devices -- setup/devices.sh@200 -- # for 
block in "/sys/block/nvme"!(*c*) 00:05:25.048 04:56:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:05:25.048 04:56:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:25.048 04:56:25 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:25.048 04:56:25 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:25.048 04:56:25 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:05:25.048 04:56:25 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:05:25.048 04:56:25 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:05:25.048 No valid GPT data, bailing 00:05:25.048 04:56:25 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:05:25.048 04:56:25 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:25.048 04:56:25 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:25.048 04:56:25 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:05:25.048 04:56:25 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:05:25.048 04:56:25 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:05:25.048 04:56:25 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:25.048 04:56:25 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:25.048 04:56:25 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:25.048 04:56:25 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:25.048 04:56:25 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:25.048 04:56:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:05:25.048 04:56:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:25.048 04:56:25 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 
00:05:25.048 04:56:25 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:25.048 04:56:25 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:05:25.048 04:56:25 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:05:25.048 04:56:25 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:05:25.048 No valid GPT data, bailing 00:05:25.048 04:56:25 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:05:25.048 04:56:25 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:25.048 04:56:25 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:25.048 04:56:25 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:05:25.048 04:56:25 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:05:25.048 04:56:25 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:05:25.048 04:56:25 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:25.048 04:56:25 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:25.048 04:56:25 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:25.048 04:56:25 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:25.048 04:56:25 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:25.048 04:56:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:25.048 04:56:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:25.048 04:56:25 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:05:25.048 04:56:25 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:25.048 04:56:25 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:25.048 04:56:25 setup.sh.devices -- scripts/common.sh@378 -- # local 
block=nvme1n1 pt 00:05:25.048 04:56:25 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:25.307 No valid GPT data, bailing 00:05:25.307 04:56:25 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:25.307 04:56:25 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:25.307 04:56:25 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:25.307 04:56:25 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:25.307 04:56:25 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:25.307 04:56:25 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:25.307 04:56:25 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:05:25.307 04:56:25 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:25.307 04:56:25 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:25.307 04:56:25 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:05:25.307 04:56:25 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:25.307 04:56:25 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:25.308 04:56:25 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:25.308 04:56:25 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:25.308 04:56:25 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.308 04:56:25 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:25.308 ************************************ 00:05:25.308 START TEST nvme_mount 00:05:25.308 ************************************ 00:05:25.308 04:56:25 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:05:25.308 04:56:25 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:25.308 04:56:25 
setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:25.308 04:56:25 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:25.308 04:56:25 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:25.308 04:56:25 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:25.308 04:56:25 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:25.308 04:56:25 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:25.308 04:56:25 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:25.308 04:56:25 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:25.308 04:56:25 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:25.308 04:56:25 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:25.308 04:56:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:25.308 04:56:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:25.308 04:56:25 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:25.308 04:56:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:25.308 04:56:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:25.308 04:56:25 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:25.308 04:56:25 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:25.308 04:56:25 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:26.244 Creating new GPT entries in memory. 00:05:26.244 GPT data structures destroyed! 
You may now partition the disk using fdisk or 00:05:26.244 other utilities. 00:05:26.244 04:56:26 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:26.244 04:56:26 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:26.244 04:56:26 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:26.244 04:56:26 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:26.244 04:56:26 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:27.180 Creating new GPT entries in memory. 00:05:27.180 The operation has completed successfully. 00:05:27.180 04:56:27 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:27.180 04:56:27 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:27.180 04:56:27 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 69007 00:05:27.180 04:56:27 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:27.180 04:56:27 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:27.180 04:56:27 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:27.180 04:56:27 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:27.180 04:56:27 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:27.439 04:56:27 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:27.439 04:56:27 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:27.439 04:56:27 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:27.439 04:56:27 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:27.439 04:56:27 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:27.439 04:56:27 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:27.439 04:56:27 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:27.439 04:56:27 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:27.439 04:56:27 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:27.439 04:56:27 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:27.439 04:56:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.439 04:56:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:27.439 04:56:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:27.439 04:56:27 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:27.439 04:56:27 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:27.439 04:56:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:27.439 04:56:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:27.439 04:56:27 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # 
found=1 00:05:27.439 04:56:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.439 04:56:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:27.439 04:56:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.699 04:56:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:27.699 04:56:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.699 04:56:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:27.699 04:56:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.699 04:56:27 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:27.699 04:56:27 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:27.699 04:56:27 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:27.699 04:56:27 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:27.699 04:56:27 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:27.699 04:56:27 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:27.699 04:56:27 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:27.699 04:56:27 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:27.699 04:56:27 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:27.699 04:56:27 setup.sh.devices.nvme_mount -- setup/devices.sh@25 
-- # wipefs --all /dev/nvme0n1p1 00:05:27.699 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:27.699 04:56:27 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:27.699 04:56:27 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:27.958 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:27.958 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:27.958 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:27.958 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:27.958 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:28.217 04:56:28 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:28.217 04:56:28 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:28.217 04:56:28 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:28.217 04:56:28 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:28.217 04:56:28 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:28.217 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:28.217 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:28.217 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:28.217 04:56:28 setup.sh.devices.nvme_mount -- 
setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:28.217 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:28.217 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:28.217 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:28.217 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:28.217 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:28.217 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.217 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:28.217 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:28.217 04:56:28 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:28.217 04:56:28 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:28.217 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:28.217 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:28.217 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:28.217 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.217 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:28.217 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.476 04:56:28 setup.sh.devices.nvme_mount 
-- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:28.476 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.476 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:28.476 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.476 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:28.476 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:28.476 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:28.476 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:28.476 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:28.476 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:28.476 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:05:28.476 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:28.476 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:28.476 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:28.476 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:28.476 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:28.476 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:28.476 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
00:05:28.476 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.476 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:28.476 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:28.476 04:56:28 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:28.476 04:56:28 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:28.735 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:28.735 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:28.735 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:28.735 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.735 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:28.735 04:56:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.994 04:56:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:28.994 04:56:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.994 04:56:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:28.994 04:56:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.994 04:56:29 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:28.994 04:56:29 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:28.994 04:56:29 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 
00:05:28.994 04:56:29 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:28.994 04:56:29 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:28.994 04:56:29 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:28.994 04:56:29 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:28.994 04:56:29 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:29.253 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:29.253 00:05:29.253 real 0m3.900s 00:05:29.253 user 0m0.682s 00:05:29.253 sys 0m0.960s 00:05:29.253 04:56:29 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.253 04:56:29 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:29.253 ************************************ 00:05:29.253 END TEST nvme_mount 00:05:29.253 ************************************ 00:05:29.253 04:56:29 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:29.253 04:56:29 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:29.253 04:56:29 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.253 04:56:29 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.253 04:56:29 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:29.253 ************************************ 00:05:29.253 START TEST dm_mount 00:05:29.253 ************************************ 00:05:29.253 04:56:29 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:29.253 04:56:29 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:29.253 04:56:29 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:29.253 04:56:29 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 
00:05:29.253 04:56:29 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:29.253 04:56:29 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:29.253 04:56:29 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:29.253 04:56:29 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:29.253 04:56:29 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:29.253 04:56:29 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:29.253 04:56:29 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:29.253 04:56:29 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:29.253 04:56:29 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:29.253 04:56:29 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:29.253 04:56:29 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:29.253 04:56:29 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:29.253 04:56:29 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:29.253 04:56:29 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:29.253 04:56:29 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:29.253 04:56:29 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:29.253 04:56:29 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:29.253 04:56:29 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:30.190 Creating new GPT entries in memory. 00:05:30.190 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:30.190 other utilities. 
00:05:30.190 04:56:30 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:30.190 04:56:30 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:30.190 04:56:30 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:30.190 04:56:30 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:30.190 04:56:30 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:31.126 Creating new GPT entries in memory. 00:05:31.126 The operation has completed successfully. 00:05:31.126 04:56:31 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:31.126 04:56:31 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:31.126 04:56:31 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:31.127 04:56:31 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:31.127 04:56:31 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:32.501 The operation has completed successfully. 
00:05:32.501 04:56:32 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:32.501 04:56:32 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:32.501 04:56:32 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 69443 00:05:32.501 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:32.501 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:32.501 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:32.501 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:32.501 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:32.501 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:32.501 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:32.501 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:32.501 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:32.501 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:32.501 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:32.501 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:32.501 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:32.501 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:32.501 04:56:32 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test 
mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:32.501 04:56:32 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:32.501 04:56:32 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:32.501 04:56:32 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:32.501 04:56:32 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:32.501 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:32.501 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:32.501 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:32.501 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:32.501 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:32.501 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:32.501 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:32.501 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:32.501 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:32.501 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.501 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:32.501 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 
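The holder checks at devices.sh@168-169 above are how the test proves the dm target really sits on both partitions: the kernel lists every dm consumer of a block device under that device's `holders/` directory. A minimal sketch of the same check, with the sysfs root parameterised (an assumption made here so the sketch can run against a fake tree instead of real devices):

```shell
# Return success iff every listed partition names $dm in its holders/ dir,
# mirroring the [[ -e /sys/class/block/<part>/holders/<dm> ]] tests above.
dm_backed_by() {
    local sysfs=$1 dm=$2 part
    shift 2
    for part in "$@"; do
        [[ -e $sysfs/$part/holders/$dm ]] || return 1
    done
}
```

Against real devices the first argument would be /sys/class/block, and the dm node name would come from `readlink -f /dev/mapper/<name>` as in the trace.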
00:05:32.501 04:56:32 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:32.501 04:56:32 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:32.501 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:32.501 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:32.501 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:32.501 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.501 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:32.501 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.759 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:32.759 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.759 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:32.759 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.759 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:32.759 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:32.759 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:32.760 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:32.760 04:56:32 
setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:32.760 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:32.760 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:32.760 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:32.760 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:32.760 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:32.760 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:32.760 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:32.760 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:32.760 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:32.760 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.760 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:32.760 04:56:32 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:32.760 04:56:32 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:32.760 04:56:32 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:33.018 04:56:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:33.018 04:56:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:33.018 04:56:33 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:33.018 04:56:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.018 04:56:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:33.018 04:56:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.275 04:56:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:33.275 04:56:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.275 04:56:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:33.275 04:56:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.275 04:56:33 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:33.275 04:56:33 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:33.275 04:56:33 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:33.275 04:56:33 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:33.275 04:56:33 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:33.275 04:56:33 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:33.275 04:56:33 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:33.275 04:56:33 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:33.275 04:56:33 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:33.275 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:33.275 04:56:33 setup.sh.devices.dm_mount -- 
setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:33.275 04:56:33 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:33.275 00:05:33.275 real 0m4.155s 00:05:33.275 user 0m0.459s 00:05:33.275 sys 0m0.676s 00:05:33.275 04:56:33 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.275 04:56:33 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:33.275 ************************************ 00:05:33.275 END TEST dm_mount 00:05:33.275 ************************************ 00:05:33.275 04:56:33 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:33.275 04:56:33 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:33.275 04:56:33 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:33.275 04:56:33 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:33.275 04:56:33 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:33.275 04:56:33 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:33.275 04:56:33 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:33.275 04:56:33 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:33.533 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:33.533 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:33.533 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:33.533 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:33.533 04:56:33 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:33.533 04:56:33 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:33.533 04:56:33 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:33.533 04:56:33 
setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:33.533 04:56:33 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:33.533 04:56:33 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:33.533 04:56:33 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:33.792 00:05:33.792 real 0m9.532s 00:05:33.792 user 0m1.788s 00:05:33.792 sys 0m2.179s 00:05:33.792 ************************************ 00:05:33.792 END TEST devices 00:05:33.792 ************************************ 00:05:33.792 04:56:33 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.792 04:56:33 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:33.792 04:56:33 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:33.792 00:05:33.792 real 0m21.221s 00:05:33.792 user 0m6.844s 00:05:33.792 sys 0m8.704s 00:05:33.792 04:56:33 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.792 04:56:33 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:33.792 ************************************ 00:05:33.792 END TEST setup.sh 00:05:33.792 ************************************ 00:05:33.792 04:56:33 -- common/autotest_common.sh@1142 -- # return 0 00:05:33.792 04:56:33 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:34.359 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:34.359 Hugepages 00:05:34.359 node hugesize free / total 00:05:34.359 node0 1048576kB 0 / 0 00:05:34.359 node0 2048kB 2048 / 2048 00:05:34.359 00:05:34.359 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:34.359 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:34.617 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:34.617 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:05:34.617 04:56:34 -- spdk/autotest.sh@130 -- # uname 
-s 00:05:34.617 04:56:34 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:34.617 04:56:34 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:34.617 04:56:34 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:35.231 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:35.231 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:35.231 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:35.490 04:56:35 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:36.451 04:56:36 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:36.451 04:56:36 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:36.451 04:56:36 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:36.451 04:56:36 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:36.451 04:56:36 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:36.451 04:56:36 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:36.451 04:56:36 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:36.451 04:56:36 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:36.451 04:56:36 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:36.452 04:56:36 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:05:36.452 04:56:36 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:36.452 04:56:36 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:36.741 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:36.741 Waiting for block devices as requested 00:05:36.999 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:36.999 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:36.999 04:56:37 -- 
common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:36.999 04:56:37 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:36.999 04:56:37 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:36.999 04:56:37 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:05:36.999 04:56:37 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:36.999 04:56:37 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:36.999 04:56:37 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:36.999 04:56:37 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:05:36.999 04:56:37 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:05:36.999 04:56:37 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:05:36.999 04:56:37 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:05:36.999 04:56:37 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:36.999 04:56:37 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:36.999 04:56:37 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:36.999 04:56:37 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:36.999 04:56:37 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:36.999 04:56:37 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:05:36.999 04:56:37 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:36.999 04:56:37 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:36.999 04:56:37 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:36.999 04:56:37 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:36.999 04:56:37 -- common/autotest_common.sh@1557 -- # continue 00:05:36.999 04:56:37 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:36.999 04:56:37 -- 
common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:36.999 04:56:37 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:36.999 04:56:37 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:05:36.999 04:56:37 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:36.999 04:56:37 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:36.999 04:56:37 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:36.999 04:56:37 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:36.999 04:56:37 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:36.999 04:56:37 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:36.999 04:56:37 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:36.999 04:56:37 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:36.999 04:56:37 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:36.999 04:56:37 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:36.999 04:56:37 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:36.999 04:56:37 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:36.999 04:56:37 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:36.999 04:56:37 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:36.999 04:56:37 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:36.999 04:56:37 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:36.999 04:56:37 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:36.999 04:56:37 -- common/autotest_common.sh@1557 -- # continue 00:05:36.999 04:56:37 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:36.999 04:56:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:36.999 04:56:37 -- common/autotest_common.sh@10 -- 
# set +x 00:05:36.999 04:56:37 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:36.999 04:56:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:36.999 04:56:37 -- common/autotest_common.sh@10 -- # set +x 00:05:36.999 04:56:37 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:37.934 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:37.934 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:37.934 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:37.934 04:56:38 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:37.934 04:56:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:37.934 04:56:38 -- common/autotest_common.sh@10 -- # set +x 00:05:37.934 04:56:38 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:37.934 04:56:38 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:37.934 04:56:38 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:37.934 04:56:38 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:37.934 04:56:38 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:37.934 04:56:38 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:37.934 04:56:38 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:37.934 04:56:38 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:37.935 04:56:38 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:37.935 04:56:38 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:37.935 04:56:38 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:37.935 04:56:38 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:05:37.935 04:56:38 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:37.935 04:56:38 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:37.935 
04:56:38 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:37.935 04:56:38 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:37.935 04:56:38 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:37.935 04:56:38 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:37.935 04:56:38 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:37.935 04:56:38 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:37.935 04:56:38 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:38.193 04:56:38 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:05:38.193 04:56:38 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:05:38.193 04:56:38 -- common/autotest_common.sh@1593 -- # return 0 00:05:38.193 04:56:38 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:38.193 04:56:38 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:38.193 04:56:38 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:38.193 04:56:38 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:38.193 04:56:38 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:38.193 04:56:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:38.193 04:56:38 -- common/autotest_common.sh@10 -- # set +x 00:05:38.193 04:56:38 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:38.193 04:56:38 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:38.193 04:56:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.193 04:56:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.193 04:56:38 -- common/autotest_common.sh@10 -- # set +x 00:05:38.193 ************************************ 00:05:38.193 START TEST env 00:05:38.193 ************************************ 00:05:38.193 04:56:38 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:38.193 * Looking for test storage... 
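The `cat /sys/bus/pci/devices/.../device` reads above implement `get_nvme_bdfs_by_id 0x0a54`: each controller's PCI device id is compared against the target id, and since the emulated controllers in this run report 0x0010, the list comes back empty and `opal_revert_cleanup` returns without doing anything. A sketch of that filter, with the sysfs root made a parameter (an assumption, so it can run against a fake tree):

```shell
# Print only the bdfs whose PCI device id matches $want, as the
# get_nvme_bdfs_by_id trace above does with 0x0a54.
bdfs_by_id() {
    local sysfs=$1 want=$2 bdf
    shift 2
    for bdf in "$@"; do
        [[ $(cat "$sysfs/$bdf/device") == "$want" ]] && echo "$bdf"
    done
    return 0
}
```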
00:05:38.193 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:38.193 04:56:38 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:38.193 04:56:38 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.193 04:56:38 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.193 04:56:38 env -- common/autotest_common.sh@10 -- # set +x 00:05:38.193 ************************************ 00:05:38.193 START TEST env_memory 00:05:38.194 ************************************ 00:05:38.194 04:56:38 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:38.194 00:05:38.194 00:05:38.194 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.194 http://cunit.sourceforge.net/ 00:05:38.194 00:05:38.194 00:05:38.194 Suite: memory 00:05:38.194 Test: alloc and free memory map ...[2024-07-23 04:56:38.308976] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:38.194 passed 00:05:38.194 Test: mem map translation ...[2024-07-23 04:56:38.340135] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:38.194 [2024-07-23 04:56:38.340176] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:38.194 [2024-07-23 04:56:38.340240] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:38.194 [2024-07-23 04:56:38.340259] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:38.194 passed 00:05:38.194 Test: mem map registration ...[2024-07-23 04:56:38.404583] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:38.194 [2024-07-23 04:56:38.404621] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:38.452 passed 00:05:38.452 Test: mem map adjacent registrations ...passed 00:05:38.452 00:05:38.452 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.452 suites 1 1 n/a 0 0 00:05:38.452 tests 4 4 4 0 0 00:05:38.453 asserts 152 152 152 0 n/a 00:05:38.453 00:05:38.453 Elapsed time = 0.213 seconds 00:05:38.453 00:05:38.453 real 0m0.228s 00:05:38.453 user 0m0.211s 00:05:38.453 sys 0m0.013s 00:05:38.453 04:56:38 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.453 04:56:38 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:38.453 ************************************ 00:05:38.453 END TEST env_memory 00:05:38.453 ************************************ 00:05:38.453 04:56:38 env -- common/autotest_common.sh@1142 -- # return 0 00:05:38.453 04:56:38 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:38.453 04:56:38 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.453 04:56:38 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.453 04:56:38 env -- common/autotest_common.sh@10 -- # set +x 00:05:38.453 ************************************ 00:05:38.453 START TEST env_vtophys 00:05:38.453 ************************************ 00:05:38.453 04:56:38 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:38.453 EAL: lib.eal log level changed from notice to debug 00:05:38.453 EAL: Detected lcore 0 as core 0 on socket 0 00:05:38.453 EAL: Detected lcore 1 as core 0 on socket 0 00:05:38.453 EAL: Detected lcore 2 as core 0 on socket 0 00:05:38.453 EAL: 
Detected lcore 3 as core 0 on socket 0 00:05:38.453 EAL: Detected lcore 4 as core 0 on socket 0 00:05:38.453 EAL: Detected lcore 5 as core 0 on socket 0 00:05:38.453 EAL: Detected lcore 6 as core 0 on socket 0 00:05:38.453 EAL: Detected lcore 7 as core 0 on socket 0 00:05:38.453 EAL: Detected lcore 8 as core 0 on socket 0 00:05:38.453 EAL: Detected lcore 9 as core 0 on socket 0 00:05:38.453 EAL: Maximum logical cores by configuration: 128 00:05:38.453 EAL: Detected CPU lcores: 10 00:05:38.453 EAL: Detected NUMA nodes: 1 00:05:38.453 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:38.453 EAL: Detected shared linkage of DPDK 00:05:38.453 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:38.453 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:38.453 EAL: Registered [vdev] bus. 00:05:38.453 EAL: bus.vdev log level changed from disabled to notice 00:05:38.453 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:38.453 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:38.453 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:38.453 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:38.453 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:38.453 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:38.453 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:38.453 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:38.453 EAL: No shared files mode enabled, IPC will be disabled 00:05:38.453 EAL: No shared files mode enabled, IPC is disabled 00:05:38.453 EAL: Selected IOVA mode 'PA' 00:05:38.453 EAL: Probing VFIO 
support... 00:05:38.453 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:38.453 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:38.453 EAL: Ask a virtual area of 0x2e000 bytes 00:05:38.453 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:38.453 EAL: Setting up physically contiguous memory... 00:05:38.453 EAL: Setting maximum number of open files to 524288 00:05:38.453 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:38.453 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:38.453 EAL: Ask a virtual area of 0x61000 bytes 00:05:38.453 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:38.453 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:38.453 EAL: Ask a virtual area of 0x400000000 bytes 00:05:38.453 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:38.453 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:38.453 EAL: Ask a virtual area of 0x61000 bytes 00:05:38.453 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:38.453 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:38.453 EAL: Ask a virtual area of 0x400000000 bytes 00:05:38.453 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:38.453 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:38.453 EAL: Ask a virtual area of 0x61000 bytes 00:05:38.453 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:38.453 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:38.453 EAL: Ask a virtual area of 0x400000000 bytes 00:05:38.453 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:38.453 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:38.453 EAL: Ask a virtual area of 0x61000 bytes 00:05:38.453 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:38.453 EAL: Memseg list allocated at socket 0, page 
size 0x800kB 00:05:38.453 EAL: Ask a virtual area of 0x400000000 bytes 00:05:38.453 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:38.453 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:38.453 EAL: Hugepages will be freed exactly as allocated. 00:05:38.453 EAL: No shared files mode enabled, IPC is disabled 00:05:38.453 EAL: No shared files mode enabled, IPC is disabled 00:05:38.712 EAL: TSC frequency is ~2200000 KHz 00:05:38.712 EAL: Main lcore 0 is ready (tid=7f50ef632a00;cpuset=[0]) 00:05:38.712 EAL: Trying to obtain current memory policy. 00:05:38.712 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.712 EAL: Restoring previous memory policy: 0 00:05:38.712 EAL: request: mp_malloc_sync 00:05:38.712 EAL: No shared files mode enabled, IPC is disabled 00:05:38.712 EAL: Heap on socket 0 was expanded by 2MB 00:05:38.712 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:38.712 EAL: No shared files mode enabled, IPC is disabled 00:05:38.712 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:38.712 EAL: Mem event callback 'spdk:(nil)' registered 00:05:38.712 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:38.712 00:05:38.712 00:05:38.712 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.712 http://cunit.sourceforge.net/ 00:05:38.712 00:05:38.712 00:05:38.712 Suite: components_suite 00:05:38.712 Test: vtophys_malloc_test ...passed 00:05:38.712 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
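The memseg addresses in the EAL output above follow one rule: each list reserves a 0x61000-byte header, then rounds up to the next 2 MiB boundary (the hugepage size) before placing its 16 GiB VA window. Redoing that arithmetic reproduces the logged addresses (the rounding rule is inferred from the log itself, not quoted from the EAL source):

```shell
# Round addr up to a power-of-two boundary, then recompute the first
# memseg list's VA window start from its logged header address.
align_up() { echo $(( ($1 + $2 - 1) & ~($2 - 1) )); }
hdr=$(( 0x20000002e000 ))                 # logged "Virtual area found at" for the header
va=$(align_up $(( hdr + 0x61000 )) $(( 2 * 1024 * 1024 )))
printf '0x%x\n' "$va"                     # -> 0x200000200000, as logged
```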
00:05:38.712 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.712 EAL: Restoring previous memory policy: 4 00:05:38.712 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.712 EAL: request: mp_malloc_sync 00:05:38.712 EAL: No shared files mode enabled, IPC is disabled 00:05:38.712 EAL: Heap on socket 0 was expanded by 4MB 00:05:38.712 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.712 EAL: request: mp_malloc_sync 00:05:38.712 EAL: No shared files mode enabled, IPC is disabled 00:05:38.712 EAL: Heap on socket 0 was shrunk by 4MB 00:05:38.712 EAL: Trying to obtain current memory policy. 00:05:38.712 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.712 EAL: Restoring previous memory policy: 4 00:05:38.712 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.712 EAL: request: mp_malloc_sync 00:05:38.712 EAL: No shared files mode enabled, IPC is disabled 00:05:38.712 EAL: Heap on socket 0 was expanded by 6MB 00:05:38.712 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.712 EAL: request: mp_malloc_sync 00:05:38.712 EAL: No shared files mode enabled, IPC is disabled 00:05:38.712 EAL: Heap on socket 0 was shrunk by 6MB 00:05:38.712 EAL: Trying to obtain current memory policy. 00:05:38.712 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.712 EAL: Restoring previous memory policy: 4 00:05:38.712 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.712 EAL: request: mp_malloc_sync 00:05:38.712 EAL: No shared files mode enabled, IPC is disabled 00:05:38.712 EAL: Heap on socket 0 was expanded by 10MB 00:05:38.712 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.712 EAL: request: mp_malloc_sync 00:05:38.712 EAL: No shared files mode enabled, IPC is disabled 00:05:38.712 EAL: Heap on socket 0 was shrunk by 10MB 00:05:38.712 EAL: Trying to obtain current memory policy. 
00:05:38.712 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.712 EAL: Restoring previous memory policy: 4 00:05:38.712 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.712 EAL: request: mp_malloc_sync 00:05:38.712 EAL: No shared files mode enabled, IPC is disabled 00:05:38.712 EAL: Heap on socket 0 was expanded by 18MB 00:05:38.712 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.712 EAL: request: mp_malloc_sync 00:05:38.712 EAL: No shared files mode enabled, IPC is disabled 00:05:38.712 EAL: Heap on socket 0 was shrunk by 18MB 00:05:38.712 EAL: Trying to obtain current memory policy. 00:05:38.712 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.712 EAL: Restoring previous memory policy: 4 00:05:38.712 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.712 EAL: request: mp_malloc_sync 00:05:38.712 EAL: No shared files mode enabled, IPC is disabled 00:05:38.712 EAL: Heap on socket 0 was expanded by 34MB 00:05:38.712 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.712 EAL: request: mp_malloc_sync 00:05:38.712 EAL: No shared files mode enabled, IPC is disabled 00:05:38.712 EAL: Heap on socket 0 was shrunk by 34MB 00:05:38.712 EAL: Trying to obtain current memory policy. 00:05:38.712 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.712 EAL: Restoring previous memory policy: 4 00:05:38.712 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.712 EAL: request: mp_malloc_sync 00:05:38.712 EAL: No shared files mode enabled, IPC is disabled 00:05:38.712 EAL: Heap on socket 0 was expanded by 66MB 00:05:38.712 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.712 EAL: request: mp_malloc_sync 00:05:38.712 EAL: No shared files mode enabled, IPC is disabled 00:05:38.712 EAL: Heap on socket 0 was shrunk by 66MB 00:05:38.712 EAL: Trying to obtain current memory policy. 
00:05:38.712 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.712 EAL: Restoring previous memory policy: 4 00:05:38.712 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.712 EAL: request: mp_malloc_sync 00:05:38.712 EAL: No shared files mode enabled, IPC is disabled 00:05:38.712 EAL: Heap on socket 0 was expanded by 130MB 00:05:38.712 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.712 EAL: request: mp_malloc_sync 00:05:38.712 EAL: No shared files mode enabled, IPC is disabled 00:05:38.712 EAL: Heap on socket 0 was shrunk by 130MB 00:05:38.713 EAL: Trying to obtain current memory policy. 00:05:38.713 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.713 EAL: Restoring previous memory policy: 4 00:05:38.713 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.713 EAL: request: mp_malloc_sync 00:05:38.713 EAL: No shared files mode enabled, IPC is disabled 00:05:38.713 EAL: Heap on socket 0 was expanded by 258MB 00:05:38.972 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.972 EAL: request: mp_malloc_sync 00:05:38.972 EAL: No shared files mode enabled, IPC is disabled 00:05:38.972 EAL: Heap on socket 0 was shrunk by 258MB 00:05:38.972 EAL: Trying to obtain current memory policy. 00:05:38.972 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.972 EAL: Restoring previous memory policy: 4 00:05:38.972 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.972 EAL: request: mp_malloc_sync 00:05:38.972 EAL: No shared files mode enabled, IPC is disabled 00:05:38.972 EAL: Heap on socket 0 was expanded by 514MB 00:05:39.231 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.231 EAL: request: mp_malloc_sync 00:05:39.231 EAL: No shared files mode enabled, IPC is disabled 00:05:39.231 EAL: Heap on socket 0 was shrunk by 514MB 00:05:39.231 EAL: Trying to obtain current memory policy. 
00:05:39.231 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.490 EAL: Restoring previous memory policy: 4 00:05:39.490 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.490 EAL: request: mp_malloc_sync 00:05:39.490 EAL: No shared files mode enabled, IPC is disabled 00:05:39.490 EAL: Heap on socket 0 was expanded by 1026MB 00:05:39.749 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.749 passed 00:05:39.749 00:05:39.749 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.749 suites 1 1 n/a 0 0 00:05:39.749 tests 2 2 2 0 0 00:05:39.749 asserts 5295 5295 5295 0 n/a 00:05:39.749 00:05:39.749 Elapsed time = 1.192 seconds 00:05:39.749 EAL: request: mp_malloc_sync 00:05:39.749 EAL: No shared files mode enabled, IPC is disabled 00:05:39.749 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:39.749 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.749 EAL: request: mp_malloc_sync 00:05:39.749 EAL: No shared files mode enabled, IPC is disabled 00:05:39.749 EAL: Heap on socket 0 was shrunk by 2MB 00:05:39.749 EAL: No shared files mode enabled, IPC is disabled 00:05:39.749 EAL: No shared files mode enabled, IPC is disabled 00:05:39.749 EAL: No shared files mode enabled, IPC is disabled 00:05:39.749 00:05:39.749 real 0m1.377s 00:05:39.749 user 0m0.763s 00:05:39.749 sys 0m0.487s 00:05:39.749 04:56:39 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.749 04:56:39 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:39.749 ************************************ 00:05:39.749 END TEST env_vtophys 00:05:39.749 ************************************ 00:05:39.749 04:56:39 env -- common/autotest_common.sh@1142 -- # return 0 00:05:39.749 04:56:39 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:39.749 04:56:39 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.749 04:56:39 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.749 04:56:39 env -- 
common/autotest_common.sh@10 -- # set +x 00:05:39.749 ************************************ 00:05:39.749 START TEST env_pci 00:05:39.749 ************************************ 00:05:40.008 04:56:39 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:40.008 00:05:40.008 00:05:40.008 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.008 http://cunit.sourceforge.net/ 00:05:40.008 00:05:40.008 00:05:40.008 Suite: pci 00:05:40.008 Test: pci_hook ...[2024-07-23 04:56:39.981917] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 70637 has claimed it 00:05:40.008 passed 00:05:40.008 00:05:40.008 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.008 suites 1 1 n/a 0 0 00:05:40.008 tests 1 1 1 0 0 00:05:40.008 asserts 25 25 25 0 n/a 00:05:40.008 00:05:40.008 Elapsed time = 0.002 seconds 00:05:40.008 EAL: Cannot find device (10000:00:01.0) 00:05:40.008 EAL: Failed to attach device on primary process 00:05:40.008 00:05:40.008 real 0m0.019s 00:05:40.008 user 0m0.010s 00:05:40.008 sys 0m0.008s 00:05:40.008 04:56:39 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.008 04:56:39 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:40.008 ************************************ 00:05:40.008 END TEST env_pci 00:05:40.008 ************************************ 00:05:40.008 04:56:40 env -- common/autotest_common.sh@1142 -- # return 0 00:05:40.008 04:56:40 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:40.008 04:56:40 env -- env/env.sh@15 -- # uname 00:05:40.008 04:56:40 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:40.008 04:56:40 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:40.008 04:56:40 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 
--base-virtaddr=0x200000000000 00:05:40.008 04:56:40 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:40.008 04:56:40 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.008 04:56:40 env -- common/autotest_common.sh@10 -- # set +x 00:05:40.008 ************************************ 00:05:40.008 START TEST env_dpdk_post_init 00:05:40.008 ************************************ 00:05:40.008 04:56:40 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:40.008 EAL: Detected CPU lcores: 10 00:05:40.008 EAL: Detected NUMA nodes: 1 00:05:40.008 EAL: Detected shared linkage of DPDK 00:05:40.008 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:40.008 EAL: Selected IOVA mode 'PA' 00:05:40.008 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:40.008 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:40.008 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:40.008 Starting DPDK initialization... 00:05:40.008 Starting SPDK post initialization... 00:05:40.008 SPDK NVMe probe 00:05:40.008 Attaching to 0000:00:10.0 00:05:40.008 Attaching to 0000:00:11.0 00:05:40.008 Attached to 0000:00:10.0 00:05:40.008 Attached to 0000:00:11.0 00:05:40.008 Cleaning up... 
00:05:40.008 00:05:40.008 real 0m0.171s 00:05:40.008 user 0m0.040s 00:05:40.008 sys 0m0.030s 00:05:40.008 04:56:40 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.008 04:56:40 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:40.008 ************************************ 00:05:40.008 END TEST env_dpdk_post_init 00:05:40.008 ************************************ 00:05:40.267 04:56:40 env -- common/autotest_common.sh@1142 -- # return 0 00:05:40.267 04:56:40 env -- env/env.sh@26 -- # uname 00:05:40.267 04:56:40 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:40.267 04:56:40 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:40.267 04:56:40 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.267 04:56:40 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.267 04:56:40 env -- common/autotest_common.sh@10 -- # set +x 00:05:40.267 ************************************ 00:05:40.267 START TEST env_mem_callbacks 00:05:40.267 ************************************ 00:05:40.267 04:56:40 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:40.267 EAL: Detected CPU lcores: 10 00:05:40.267 EAL: Detected NUMA nodes: 1 00:05:40.267 EAL: Detected shared linkage of DPDK 00:05:40.267 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:40.267 EAL: Selected IOVA mode 'PA' 00:05:40.267 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:40.267 00:05:40.267 00:05:40.267 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.267 http://cunit.sourceforge.net/ 00:05:40.267 00:05:40.267 00:05:40.267 Suite: memory 00:05:40.267 Test: test ... 
00:05:40.267 register 0x200000200000 2097152 00:05:40.267 malloc 3145728 00:05:40.267 register 0x200000400000 4194304 00:05:40.267 buf 0x200000500000 len 3145728 PASSED 00:05:40.267 malloc 64 00:05:40.267 buf 0x2000004fff40 len 64 PASSED 00:05:40.267 malloc 4194304 00:05:40.267 register 0x200000800000 6291456 00:05:40.267 buf 0x200000a00000 len 4194304 PASSED 00:05:40.267 free 0x200000500000 3145728 00:05:40.267 free 0x2000004fff40 64 00:05:40.267 unregister 0x200000400000 4194304 PASSED 00:05:40.267 free 0x200000a00000 4194304 00:05:40.267 unregister 0x200000800000 6291456 PASSED 00:05:40.267 malloc 8388608 00:05:40.267 register 0x200000400000 10485760 00:05:40.267 buf 0x200000600000 len 8388608 PASSED 00:05:40.267 free 0x200000600000 8388608 00:05:40.267 unregister 0x200000400000 10485760 PASSED 00:05:40.267 passed 00:05:40.267 00:05:40.267 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.267 suites 1 1 n/a 0 0 00:05:40.267 tests 1 1 1 0 0 00:05:40.267 asserts 15 15 15 0 n/a 00:05:40.267 00:05:40.267 Elapsed time = 0.008 seconds 00:05:40.267 00:05:40.267 real 0m0.141s 00:05:40.267 user 0m0.016s 00:05:40.267 sys 0m0.024s 00:05:40.267 04:56:40 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.267 04:56:40 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:40.267 ************************************ 00:05:40.267 END TEST env_mem_callbacks 00:05:40.267 ************************************ 00:05:40.267 04:56:40 env -- common/autotest_common.sh@1142 -- # return 0 00:05:40.267 00:05:40.267 real 0m2.276s 00:05:40.267 user 0m1.153s 00:05:40.267 sys 0m0.779s 00:05:40.267 04:56:40 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.267 04:56:40 env -- common/autotest_common.sh@10 -- # set +x 00:05:40.267 ************************************ 00:05:40.267 END TEST env 00:05:40.267 ************************************ 00:05:40.527 04:56:40 -- common/autotest_common.sh@1142 -- # return 0 
00:05:40.527 04:56:40 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:40.527 04:56:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.527 04:56:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.527 04:56:40 -- common/autotest_common.sh@10 -- # set +x 00:05:40.527 ************************************ 00:05:40.527 START TEST rpc 00:05:40.527 ************************************ 00:05:40.527 04:56:40 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:40.527 * Looking for test storage... 00:05:40.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.527 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:40.527 04:56:40 rpc -- rpc/rpc.sh@65 -- # spdk_pid=70746 00:05:40.527 04:56:40 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:40.527 04:56:40 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.527 04:56:40 rpc -- rpc/rpc.sh@67 -- # waitforlisten 70746 00:05:40.527 04:56:40 rpc -- common/autotest_common.sh@829 -- # '[' -z 70746 ']' 00:05:40.527 04:56:40 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.527 04:56:40 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.527 04:56:40 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.527 04:56:40 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.527 04:56:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.527 [2024-07-23 04:56:40.681370] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:05:40.527 [2024-07-23 04:56:40.681628] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70746 ] 00:05:40.786 [2024-07-23 04:56:40.822092] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.786 [2024-07-23 04:56:40.891553] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:40.786 [2024-07-23 04:56:40.891864] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 70746' to capture a snapshot of events at runtime. 00:05:40.786 [2024-07-23 04:56:40.892058] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:40.786 [2024-07-23 04:56:40.892201] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:40.786 [2024-07-23 04:56:40.892249] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid70746 for offline analysis/debug. 
00:05:40.786 [2024-07-23 04:56:40.892500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.723 04:56:41 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.723 04:56:41 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:41.723 04:56:41 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:41.723 04:56:41 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:41.723 04:56:41 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:41.723 04:56:41 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:41.723 04:56:41 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.723 04:56:41 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.723 04:56:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.723 ************************************ 00:05:41.723 START TEST rpc_integrity 00:05:41.723 ************************************ 00:05:41.723 04:56:41 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:41.723 04:56:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:41.723 04:56:41 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.723 04:56:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.723 04:56:41 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.723 04:56:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:41.723 04:56:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:41.723 04:56:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:41.723 04:56:41 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:41.723 04:56:41 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.723 04:56:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.723 04:56:41 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.723 04:56:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:41.723 04:56:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:41.723 04:56:41 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.723 04:56:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.723 04:56:41 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.723 04:56:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:41.723 { 00:05:41.723 "name": "Malloc0", 00:05:41.723 "aliases": [ 00:05:41.723 "3eeb2e48-5434-4184-a6b2-597731bbba65" 00:05:41.723 ], 00:05:41.723 "product_name": "Malloc disk", 00:05:41.723 "block_size": 512, 00:05:41.723 "num_blocks": 16384, 00:05:41.723 "uuid": "3eeb2e48-5434-4184-a6b2-597731bbba65", 00:05:41.723 "assigned_rate_limits": { 00:05:41.723 "rw_ios_per_sec": 0, 00:05:41.723 "rw_mbytes_per_sec": 0, 00:05:41.723 "r_mbytes_per_sec": 0, 00:05:41.723 "w_mbytes_per_sec": 0 00:05:41.723 }, 00:05:41.723 "claimed": false, 00:05:41.723 "zoned": false, 00:05:41.723 "supported_io_types": { 00:05:41.723 "read": true, 00:05:41.723 "write": true, 00:05:41.723 "unmap": true, 00:05:41.723 "flush": true, 00:05:41.723 "reset": true, 00:05:41.723 "nvme_admin": false, 00:05:41.723 "nvme_io": false, 00:05:41.723 "nvme_io_md": false, 00:05:41.723 "write_zeroes": true, 00:05:41.723 "zcopy": true, 00:05:41.723 "get_zone_info": false, 00:05:41.723 "zone_management": false, 00:05:41.723 "zone_append": false, 00:05:41.723 "compare": false, 00:05:41.723 "compare_and_write": false, 00:05:41.723 "abort": true, 00:05:41.723 "seek_hole": false, 
00:05:41.723 "seek_data": false, 00:05:41.723 "copy": true, 00:05:41.723 "nvme_iov_md": false 00:05:41.723 }, 00:05:41.723 "memory_domains": [ 00:05:41.723 { 00:05:41.723 "dma_device_id": "system", 00:05:41.723 "dma_device_type": 1 00:05:41.723 }, 00:05:41.723 { 00:05:41.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.723 "dma_device_type": 2 00:05:41.723 } 00:05:41.723 ], 00:05:41.723 "driver_specific": {} 00:05:41.723 } 00:05:41.723 ]' 00:05:41.723 04:56:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:41.723 04:56:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:41.723 04:56:41 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:41.723 04:56:41 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.723 04:56:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.723 [2024-07-23 04:56:41.790066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:41.723 [2024-07-23 04:56:41.790106] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:41.723 [2024-07-23 04:56:41.790147] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e5af50 00:05:41.723 [2024-07-23 04:56:41.790172] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:41.723 [2024-07-23 04:56:41.791722] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:41.723 [2024-07-23 04:56:41.791752] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:41.723 Passthru0 00:05:41.723 04:56:41 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.723 04:56:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:41.723 04:56:41 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.723 04:56:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:41.723 04:56:41 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.723 04:56:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:41.723 { 00:05:41.723 "name": "Malloc0", 00:05:41.723 "aliases": [ 00:05:41.723 "3eeb2e48-5434-4184-a6b2-597731bbba65" 00:05:41.723 ], 00:05:41.723 "product_name": "Malloc disk", 00:05:41.723 "block_size": 512, 00:05:41.723 "num_blocks": 16384, 00:05:41.723 "uuid": "3eeb2e48-5434-4184-a6b2-597731bbba65", 00:05:41.723 "assigned_rate_limits": { 00:05:41.723 "rw_ios_per_sec": 0, 00:05:41.723 "rw_mbytes_per_sec": 0, 00:05:41.723 "r_mbytes_per_sec": 0, 00:05:41.723 "w_mbytes_per_sec": 0 00:05:41.723 }, 00:05:41.723 "claimed": true, 00:05:41.723 "claim_type": "exclusive_write", 00:05:41.723 "zoned": false, 00:05:41.723 "supported_io_types": { 00:05:41.723 "read": true, 00:05:41.723 "write": true, 00:05:41.723 "unmap": true, 00:05:41.723 "flush": true, 00:05:41.723 "reset": true, 00:05:41.723 "nvme_admin": false, 00:05:41.723 "nvme_io": false, 00:05:41.723 "nvme_io_md": false, 00:05:41.723 "write_zeroes": true, 00:05:41.723 "zcopy": true, 00:05:41.723 "get_zone_info": false, 00:05:41.723 "zone_management": false, 00:05:41.723 "zone_append": false, 00:05:41.723 "compare": false, 00:05:41.723 "compare_and_write": false, 00:05:41.723 "abort": true, 00:05:41.723 "seek_hole": false, 00:05:41.723 "seek_data": false, 00:05:41.723 "copy": true, 00:05:41.723 "nvme_iov_md": false 00:05:41.723 }, 00:05:41.723 "memory_domains": [ 00:05:41.723 { 00:05:41.723 "dma_device_id": "system", 00:05:41.723 "dma_device_type": 1 00:05:41.723 }, 00:05:41.723 { 00:05:41.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.723 "dma_device_type": 2 00:05:41.723 } 00:05:41.723 ], 00:05:41.723 "driver_specific": {} 00:05:41.723 }, 00:05:41.723 { 00:05:41.723 "name": "Passthru0", 00:05:41.723 "aliases": [ 00:05:41.723 "2a6e0535-a450-59cc-82f2-ddfb85b4937c" 00:05:41.723 ], 00:05:41.723 "product_name": "passthru", 00:05:41.723 
"block_size": 512, 00:05:41.723 "num_blocks": 16384, 00:05:41.723 "uuid": "2a6e0535-a450-59cc-82f2-ddfb85b4937c", 00:05:41.723 "assigned_rate_limits": { 00:05:41.723 "rw_ios_per_sec": 0, 00:05:41.723 "rw_mbytes_per_sec": 0, 00:05:41.723 "r_mbytes_per_sec": 0, 00:05:41.723 "w_mbytes_per_sec": 0 00:05:41.723 }, 00:05:41.723 "claimed": false, 00:05:41.724 "zoned": false, 00:05:41.724 "supported_io_types": { 00:05:41.724 "read": true, 00:05:41.724 "write": true, 00:05:41.724 "unmap": true, 00:05:41.724 "flush": true, 00:05:41.724 "reset": true, 00:05:41.724 "nvme_admin": false, 00:05:41.724 "nvme_io": false, 00:05:41.724 "nvme_io_md": false, 00:05:41.724 "write_zeroes": true, 00:05:41.724 "zcopy": true, 00:05:41.724 "get_zone_info": false, 00:05:41.724 "zone_management": false, 00:05:41.724 "zone_append": false, 00:05:41.724 "compare": false, 00:05:41.724 "compare_and_write": false, 00:05:41.724 "abort": true, 00:05:41.724 "seek_hole": false, 00:05:41.724 "seek_data": false, 00:05:41.724 "copy": true, 00:05:41.724 "nvme_iov_md": false 00:05:41.724 }, 00:05:41.724 "memory_domains": [ 00:05:41.724 { 00:05:41.724 "dma_device_id": "system", 00:05:41.724 "dma_device_type": 1 00:05:41.724 }, 00:05:41.724 { 00:05:41.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.724 "dma_device_type": 2 00:05:41.724 } 00:05:41.724 ], 00:05:41.724 "driver_specific": { 00:05:41.724 "passthru": { 00:05:41.724 "name": "Passthru0", 00:05:41.724 "base_bdev_name": "Malloc0" 00:05:41.724 } 00:05:41.724 } 00:05:41.724 } 00:05:41.724 ]' 00:05:41.724 04:56:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:41.724 04:56:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:41.724 04:56:41 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:41.724 04:56:41 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.724 04:56:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.724 04:56:41 rpc.rpc_integrity 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.724 04:56:41 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:41.724 04:56:41 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.724 04:56:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.724 04:56:41 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.724 04:56:41 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:41.724 04:56:41 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.724 04:56:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.724 04:56:41 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.724 04:56:41 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:41.724 04:56:41 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:41.983 ************************************ 00:05:41.983 END TEST rpc_integrity 00:05:41.983 ************************************ 00:05:41.983 04:56:41 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:41.983 00:05:41.983 real 0m0.315s 00:05:41.983 user 0m0.215s 00:05:41.983 sys 0m0.035s 00:05:41.983 04:56:41 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.983 04:56:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.983 04:56:41 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:41.983 04:56:41 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:41.983 04:56:41 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.983 04:56:41 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.983 04:56:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.983 ************************************ 00:05:41.983 START TEST rpc_plugins 00:05:41.983 ************************************ 00:05:41.983 04:56:41 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # 
rpc_plugins 00:05:41.983 04:56:41 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:41.983 04:56:41 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.983 04:56:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:41.983 04:56:42 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.983 04:56:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:41.983 04:56:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:41.983 04:56:42 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.983 04:56:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:41.983 04:56:42 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.983 04:56:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:41.983 { 00:05:41.983 "name": "Malloc1", 00:05:41.983 "aliases": [ 00:05:41.983 "0477f816-d762-46b6-a658-8f631cb731f8" 00:05:41.983 ], 00:05:41.983 "product_name": "Malloc disk", 00:05:41.983 "block_size": 4096, 00:05:41.983 "num_blocks": 256, 00:05:41.983 "uuid": "0477f816-d762-46b6-a658-8f631cb731f8", 00:05:41.983 "assigned_rate_limits": { 00:05:41.983 "rw_ios_per_sec": 0, 00:05:41.983 "rw_mbytes_per_sec": 0, 00:05:41.983 "r_mbytes_per_sec": 0, 00:05:41.983 "w_mbytes_per_sec": 0 00:05:41.983 }, 00:05:41.983 "claimed": false, 00:05:41.983 "zoned": false, 00:05:41.983 "supported_io_types": { 00:05:41.983 "read": true, 00:05:41.983 "write": true, 00:05:41.983 "unmap": true, 00:05:41.983 "flush": true, 00:05:41.983 "reset": true, 00:05:41.983 "nvme_admin": false, 00:05:41.983 "nvme_io": false, 00:05:41.983 "nvme_io_md": false, 00:05:41.983 "write_zeroes": true, 00:05:41.983 "zcopy": true, 00:05:41.983 "get_zone_info": false, 00:05:41.983 "zone_management": false, 00:05:41.983 "zone_append": false, 00:05:41.983 "compare": false, 00:05:41.983 "compare_and_write": false, 00:05:41.983 "abort": true, 00:05:41.983 
"seek_hole": false, 00:05:41.983 "seek_data": false, 00:05:41.983 "copy": true, 00:05:41.983 "nvme_iov_md": false 00:05:41.983 }, 00:05:41.983 "memory_domains": [ 00:05:41.983 { 00:05:41.983 "dma_device_id": "system", 00:05:41.983 "dma_device_type": 1 00:05:41.983 }, 00:05:41.983 { 00:05:41.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.983 "dma_device_type": 2 00:05:41.983 } 00:05:41.983 ], 00:05:41.983 "driver_specific": {} 00:05:41.983 } 00:05:41.983 ]' 00:05:41.983 04:56:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:41.983 04:56:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:41.983 04:56:42 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:41.983 04:56:42 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.983 04:56:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:41.983 04:56:42 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.983 04:56:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:41.983 04:56:42 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.983 04:56:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:41.983 04:56:42 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.983 04:56:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:41.983 04:56:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:41.983 ************************************ 00:05:41.983 END TEST rpc_plugins 00:05:41.983 ************************************ 00:05:41.983 04:56:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:41.983 00:05:41.983 real 0m0.167s 00:05:41.983 user 0m0.107s 00:05:41.983 sys 0m0.021s 00:05:41.983 04:56:42 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.983 04:56:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.242 04:56:42 rpc -- 
common/autotest_common.sh@1142 -- # return 0 00:05:42.242 04:56:42 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:42.242 04:56:42 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.242 04:56:42 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.242 04:56:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.242 ************************************ 00:05:42.242 START TEST rpc_trace_cmd_test 00:05:42.242 ************************************ 00:05:42.242 04:56:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:42.242 04:56:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:42.242 04:56:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:42.242 04:56:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.242 04:56:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:42.242 04:56:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.242 04:56:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:42.242 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid70746", 00:05:42.242 "tpoint_group_mask": "0x8", 00:05:42.242 "iscsi_conn": { 00:05:42.242 "mask": "0x2", 00:05:42.242 "tpoint_mask": "0x0" 00:05:42.242 }, 00:05:42.242 "scsi": { 00:05:42.242 "mask": "0x4", 00:05:42.242 "tpoint_mask": "0x0" 00:05:42.242 }, 00:05:42.242 "bdev": { 00:05:42.242 "mask": "0x8", 00:05:42.242 "tpoint_mask": "0xffffffffffffffff" 00:05:42.242 }, 00:05:42.242 "nvmf_rdma": { 00:05:42.242 "mask": "0x10", 00:05:42.242 "tpoint_mask": "0x0" 00:05:42.242 }, 00:05:42.242 "nvmf_tcp": { 00:05:42.242 "mask": "0x20", 00:05:42.242 "tpoint_mask": "0x0" 00:05:42.242 }, 00:05:42.242 "ftl": { 00:05:42.242 "mask": "0x40", 00:05:42.242 "tpoint_mask": "0x0" 00:05:42.242 }, 00:05:42.242 "blobfs": { 00:05:42.242 "mask": "0x80", 00:05:42.242 "tpoint_mask": "0x0" 00:05:42.242 }, 
00:05:42.242 "dsa": { 00:05:42.242 "mask": "0x200", 00:05:42.242 "tpoint_mask": "0x0" 00:05:42.242 }, 00:05:42.242 "thread": { 00:05:42.242 "mask": "0x400", 00:05:42.242 "tpoint_mask": "0x0" 00:05:42.242 }, 00:05:42.242 "nvme_pcie": { 00:05:42.242 "mask": "0x800", 00:05:42.242 "tpoint_mask": "0x0" 00:05:42.242 }, 00:05:42.242 "iaa": { 00:05:42.242 "mask": "0x1000", 00:05:42.242 "tpoint_mask": "0x0" 00:05:42.242 }, 00:05:42.242 "nvme_tcp": { 00:05:42.242 "mask": "0x2000", 00:05:42.242 "tpoint_mask": "0x0" 00:05:42.242 }, 00:05:42.242 "bdev_nvme": { 00:05:42.242 "mask": "0x4000", 00:05:42.242 "tpoint_mask": "0x0" 00:05:42.242 }, 00:05:42.242 "sock": { 00:05:42.242 "mask": "0x8000", 00:05:42.242 "tpoint_mask": "0x0" 00:05:42.242 } 00:05:42.242 }' 00:05:42.242 04:56:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:42.242 04:56:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:42.242 04:56:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:42.242 04:56:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:42.242 04:56:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:42.242 04:56:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:42.242 04:56:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:42.242 04:56:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:42.242 04:56:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:42.502 ************************************ 00:05:42.502 END TEST rpc_trace_cmd_test 00:05:42.502 ************************************ 00:05:42.502 04:56:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:42.502 00:05:42.502 real 0m0.289s 00:05:42.502 user 0m0.249s 00:05:42.502 sys 0m0.018s 00:05:42.502 04:56:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.502 
04:56:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:42.502 04:56:42 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:42.502 04:56:42 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:42.502 04:56:42 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:42.502 04:56:42 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:42.502 04:56:42 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.502 04:56:42 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.502 04:56:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.502 ************************************ 00:05:42.502 START TEST rpc_daemon_integrity 00:05:42.502 ************************************ 00:05:42.502 04:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:42.502 04:56:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:42.502 04:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.502 04:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.502 04:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.502 04:56:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:42.502 04:56:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:42.502 04:56:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:42.502 04:56:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:42.502 04:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.502 04:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.502 04:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.502 04:56:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:42.502 04:56:42 rpc.rpc_daemon_integrity -- 
rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:42.502 04:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.502 04:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.502 04:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.502 04:56:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:42.502 { 00:05:42.502 "name": "Malloc2", 00:05:42.502 "aliases": [ 00:05:42.502 "06d06e75-257a-47d8-b26a-fedd3e62c1fd" 00:05:42.502 ], 00:05:42.502 "product_name": "Malloc disk", 00:05:42.502 "block_size": 512, 00:05:42.502 "num_blocks": 16384, 00:05:42.502 "uuid": "06d06e75-257a-47d8-b26a-fedd3e62c1fd", 00:05:42.502 "assigned_rate_limits": { 00:05:42.502 "rw_ios_per_sec": 0, 00:05:42.502 "rw_mbytes_per_sec": 0, 00:05:42.502 "r_mbytes_per_sec": 0, 00:05:42.502 "w_mbytes_per_sec": 0 00:05:42.502 }, 00:05:42.502 "claimed": false, 00:05:42.502 "zoned": false, 00:05:42.502 "supported_io_types": { 00:05:42.502 "read": true, 00:05:42.502 "write": true, 00:05:42.502 "unmap": true, 00:05:42.502 "flush": true, 00:05:42.502 "reset": true, 00:05:42.502 "nvme_admin": false, 00:05:42.502 "nvme_io": false, 00:05:42.502 "nvme_io_md": false, 00:05:42.502 "write_zeroes": true, 00:05:42.502 "zcopy": true, 00:05:42.502 "get_zone_info": false, 00:05:42.502 "zone_management": false, 00:05:42.502 "zone_append": false, 00:05:42.502 "compare": false, 00:05:42.502 "compare_and_write": false, 00:05:42.502 "abort": true, 00:05:42.502 "seek_hole": false, 00:05:42.502 "seek_data": false, 00:05:42.502 "copy": true, 00:05:42.502 "nvme_iov_md": false 00:05:42.502 }, 00:05:42.502 "memory_domains": [ 00:05:42.502 { 00:05:42.502 "dma_device_id": "system", 00:05:42.502 "dma_device_type": 1 00:05:42.502 }, 00:05:42.502 { 00:05:42.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.502 "dma_device_type": 2 00:05:42.502 } 00:05:42.502 ], 00:05:42.502 "driver_specific": {} 00:05:42.502 } 
00:05:42.502 ]' 00:05:42.502 04:56:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:42.502 04:56:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:42.502 04:56:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:42.502 04:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.502 04:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.502 [2024-07-23 04:56:42.706423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:42.502 [2024-07-23 04:56:42.706469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:42.502 [2024-07-23 04:56:42.706489] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e5cf10 00:05:42.502 [2024-07-23 04:56:42.706498] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:42.502 [2024-07-23 04:56:42.707867] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:42.502 [2024-07-23 04:56:42.707896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:42.502 Passthru0 00:05:42.502 04:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.502 04:56:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:42.502 04:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.502 04:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.761 04:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.761 04:56:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:42.761 { 00:05:42.761 "name": "Malloc2", 00:05:42.761 "aliases": [ 00:05:42.762 "06d06e75-257a-47d8-b26a-fedd3e62c1fd" 00:05:42.762 ], 00:05:42.762 "product_name": "Malloc disk", 00:05:42.762 
"block_size": 512, 00:05:42.762 "num_blocks": 16384, 00:05:42.762 "uuid": "06d06e75-257a-47d8-b26a-fedd3e62c1fd", 00:05:42.762 "assigned_rate_limits": { 00:05:42.762 "rw_ios_per_sec": 0, 00:05:42.762 "rw_mbytes_per_sec": 0, 00:05:42.762 "r_mbytes_per_sec": 0, 00:05:42.762 "w_mbytes_per_sec": 0 00:05:42.762 }, 00:05:42.762 "claimed": true, 00:05:42.762 "claim_type": "exclusive_write", 00:05:42.762 "zoned": false, 00:05:42.762 "supported_io_types": { 00:05:42.762 "read": true, 00:05:42.762 "write": true, 00:05:42.762 "unmap": true, 00:05:42.762 "flush": true, 00:05:42.762 "reset": true, 00:05:42.762 "nvme_admin": false, 00:05:42.762 "nvme_io": false, 00:05:42.762 "nvme_io_md": false, 00:05:42.762 "write_zeroes": true, 00:05:42.762 "zcopy": true, 00:05:42.762 "get_zone_info": false, 00:05:42.762 "zone_management": false, 00:05:42.762 "zone_append": false, 00:05:42.762 "compare": false, 00:05:42.762 "compare_and_write": false, 00:05:42.762 "abort": true, 00:05:42.762 "seek_hole": false, 00:05:42.762 "seek_data": false, 00:05:42.762 "copy": true, 00:05:42.762 "nvme_iov_md": false 00:05:42.762 }, 00:05:42.762 "memory_domains": [ 00:05:42.762 { 00:05:42.762 "dma_device_id": "system", 00:05:42.762 "dma_device_type": 1 00:05:42.762 }, 00:05:42.762 { 00:05:42.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.762 "dma_device_type": 2 00:05:42.762 } 00:05:42.762 ], 00:05:42.762 "driver_specific": {} 00:05:42.762 }, 00:05:42.762 { 00:05:42.762 "name": "Passthru0", 00:05:42.762 "aliases": [ 00:05:42.762 "f923b6b1-c296-558d-8b72-dcca7bc677f3" 00:05:42.762 ], 00:05:42.762 "product_name": "passthru", 00:05:42.762 "block_size": 512, 00:05:42.762 "num_blocks": 16384, 00:05:42.762 "uuid": "f923b6b1-c296-558d-8b72-dcca7bc677f3", 00:05:42.762 "assigned_rate_limits": { 00:05:42.762 "rw_ios_per_sec": 0, 00:05:42.762 "rw_mbytes_per_sec": 0, 00:05:42.762 "r_mbytes_per_sec": 0, 00:05:42.762 "w_mbytes_per_sec": 0 00:05:42.762 }, 00:05:42.762 "claimed": false, 00:05:42.762 "zoned": false, 
00:05:42.762 "supported_io_types": { 00:05:42.762 "read": true, 00:05:42.762 "write": true, 00:05:42.762 "unmap": true, 00:05:42.762 "flush": true, 00:05:42.762 "reset": true, 00:05:42.762 "nvme_admin": false, 00:05:42.762 "nvme_io": false, 00:05:42.762 "nvme_io_md": false, 00:05:42.762 "write_zeroes": true, 00:05:42.762 "zcopy": true, 00:05:42.762 "get_zone_info": false, 00:05:42.762 "zone_management": false, 00:05:42.762 "zone_append": false, 00:05:42.762 "compare": false, 00:05:42.762 "compare_and_write": false, 00:05:42.762 "abort": true, 00:05:42.762 "seek_hole": false, 00:05:42.762 "seek_data": false, 00:05:42.762 "copy": true, 00:05:42.762 "nvme_iov_md": false 00:05:42.762 }, 00:05:42.762 "memory_domains": [ 00:05:42.762 { 00:05:42.762 "dma_device_id": "system", 00:05:42.762 "dma_device_type": 1 00:05:42.762 }, 00:05:42.762 { 00:05:42.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.762 "dma_device_type": 2 00:05:42.762 } 00:05:42.762 ], 00:05:42.762 "driver_specific": { 00:05:42.762 "passthru": { 00:05:42.762 "name": "Passthru0", 00:05:42.762 "base_bdev_name": "Malloc2" 00:05:42.762 } 00:05:42.762 } 00:05:42.762 } 00:05:42.762 ]' 00:05:42.762 04:56:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:42.762 04:56:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:42.762 04:56:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:42.762 04:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.762 04:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.762 04:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.762 04:56:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:42.762 04:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.762 04:56:42 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:05:42.762 04:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.762 04:56:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:42.762 04:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.762 04:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.762 04:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.762 04:56:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:42.762 04:56:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:42.762 ************************************ 00:05:42.762 END TEST rpc_daemon_integrity 00:05:42.762 ************************************ 00:05:42.762 04:56:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:42.762 00:05:42.762 real 0m0.329s 00:05:42.762 user 0m0.215s 00:05:42.762 sys 0m0.040s 00:05:42.762 04:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.762 04:56:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.762 04:56:42 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:42.762 04:56:42 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:42.762 04:56:42 rpc -- rpc/rpc.sh@84 -- # killprocess 70746 00:05:42.762 04:56:42 rpc -- common/autotest_common.sh@948 -- # '[' -z 70746 ']' 00:05:42.762 04:56:42 rpc -- common/autotest_common.sh@952 -- # kill -0 70746 00:05:42.762 04:56:42 rpc -- common/autotest_common.sh@953 -- # uname 00:05:42.762 04:56:42 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:42.762 04:56:42 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 70746 00:05:42.762 killing process with pid 70746 00:05:42.762 04:56:42 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:42.762 04:56:42 rpc -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:42.762 04:56:42 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 70746' 00:05:42.762 04:56:42 rpc -- common/autotest_common.sh@967 -- # kill 70746 00:05:42.762 04:56:42 rpc -- common/autotest_common.sh@972 -- # wait 70746 00:05:43.329 00:05:43.329 real 0m2.781s 00:05:43.329 user 0m3.616s 00:05:43.329 sys 0m0.668s 00:05:43.329 04:56:43 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.329 ************************************ 00:05:43.329 END TEST rpc 00:05:43.329 ************************************ 00:05:43.329 04:56:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.329 04:56:43 -- common/autotest_common.sh@1142 -- # return 0 00:05:43.329 04:56:43 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:43.329 04:56:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.329 04:56:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.329 04:56:43 -- common/autotest_common.sh@10 -- # set +x 00:05:43.329 ************************************ 00:05:43.329 START TEST skip_rpc 00:05:43.329 ************************************ 00:05:43.329 04:56:43 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:43.329 * Looking for test storage... 
00:05:43.329 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:43.329 04:56:43 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:43.329 04:56:43 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:43.329 04:56:43 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:43.329 04:56:43 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.329 04:56:43 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.329 04:56:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.329 ************************************ 00:05:43.329 START TEST skip_rpc 00:05:43.330 ************************************ 00:05:43.330 04:56:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:43.330 04:56:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=70943 00:05:43.330 04:56:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:43.330 04:56:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:43.330 04:56:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:43.330 [2024-07-23 04:56:43.512515] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:05:43.330 [2024-07-23 04:56:43.512607] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70943 ] 00:05:43.588 [2024-07-23 04:56:43.650777] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.588 [2024-07-23 04:56:43.702002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.859 04:56:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:48.860 04:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:48.860 04:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:48.860 04:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:48.860 04:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.860 04:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:48.860 04:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.860 04:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:48.860 04:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.860 04:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.860 04:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:48.860 04:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:48.860 04:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:48.860 04:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:48.860 04:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:48.860 04:56:48 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:48.860 04:56:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 70943 00:05:48.860 04:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 70943 ']' 00:05:48.860 04:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 70943 00:05:48.860 04:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:48.860 04:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:48.860 04:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 70943 00:05:48.860 04:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:48.860 killing process with pid 70943 00:05:48.860 04:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:48.860 04:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 70943' 00:05:48.860 04:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 70943 00:05:48.860 04:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 70943 00:05:48.860 ************************************ 00:05:48.860 END TEST skip_rpc 00:05:48.860 ************************************ 00:05:48.860 00:05:48.860 real 0m5.385s 00:05:48.860 user 0m5.015s 00:05:48.860 sys 0m0.282s 00:05:48.860 04:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.860 04:56:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.860 04:56:48 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:48.860 04:56:48 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:48.860 04:56:48 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:48.860 04:56:48 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.860 04:56:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 
00:05:48.860 ************************************ 00:05:48.860 START TEST skip_rpc_with_json 00:05:48.860 ************************************ 00:05:48.860 04:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:48.860 04:56:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:48.860 04:56:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=71031 00:05:48.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.860 04:56:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.860 04:56:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 71031 00:05:48.860 04:56:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.860 04:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 71031 ']' 00:05:48.860 04:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.860 04:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:48.860 04:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.860 04:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:48.860 04:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:48.860 [2024-07-23 04:56:48.937916] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:05:48.860 [2024-07-23 04:56:48.938008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71031 ] 00:05:48.860 [2024-07-23 04:56:49.075693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.130 [2024-07-23 04:56:49.145196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.697 04:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.697 04:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:49.697 04:56:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:49.697 04:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.697 04:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.697 [2024-07-23 04:56:49.870859] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:49.697 request: 00:05:49.697 { 00:05:49.697 "trtype": "tcp", 00:05:49.697 "method": "nvmf_get_transports", 00:05:49.697 "req_id": 1 00:05:49.697 } 00:05:49.697 Got JSON-RPC error response 00:05:49.697 response: 00:05:49.697 { 00:05:49.697 "code": -19, 00:05:49.697 "message": "No such device" 00:05:49.697 } 00:05:49.697 04:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:49.697 04:56:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:49.697 04:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.697 04:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.697 [2024-07-23 04:56:49.878959] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:05:49.697 04:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.697 04:56:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:49.697 04:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.697 04:56:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.956 04:56:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.957 04:56:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:49.957 { 00:05:49.957 "subsystems": [ 00:05:49.957 { 00:05:49.957 "subsystem": "keyring", 00:05:49.957 "config": [] 00:05:49.957 }, 00:05:49.957 { 00:05:49.957 "subsystem": "iobuf", 00:05:49.957 "config": [ 00:05:49.957 { 00:05:49.957 "method": "iobuf_set_options", 00:05:49.957 "params": { 00:05:49.957 "small_pool_count": 8192, 00:05:49.957 "large_pool_count": 1024, 00:05:49.957 "small_bufsize": 8192, 00:05:49.957 "large_bufsize": 135168 00:05:49.957 } 00:05:49.957 } 00:05:49.957 ] 00:05:49.957 }, 00:05:49.957 { 00:05:49.957 "subsystem": "sock", 00:05:49.957 "config": [ 00:05:49.957 { 00:05:49.957 "method": "sock_set_default_impl", 00:05:49.957 "params": { 00:05:49.957 "impl_name": "posix" 00:05:49.957 } 00:05:49.957 }, 00:05:49.957 { 00:05:49.957 "method": "sock_impl_set_options", 00:05:49.957 "params": { 00:05:49.957 "impl_name": "ssl", 00:05:49.957 "recv_buf_size": 4096, 00:05:49.957 "send_buf_size": 4096, 00:05:49.957 "enable_recv_pipe": true, 00:05:49.957 "enable_quickack": false, 00:05:49.957 "enable_placement_id": 0, 00:05:49.957 "enable_zerocopy_send_server": true, 00:05:49.957 "enable_zerocopy_send_client": false, 00:05:49.957 "zerocopy_threshold": 0, 00:05:49.957 "tls_version": 0, 00:05:49.957 "enable_ktls": false 00:05:49.957 } 00:05:49.957 }, 00:05:49.957 { 00:05:49.957 "method": "sock_impl_set_options", 00:05:49.957 "params": { 
00:05:49.957 "impl_name": "posix", 00:05:49.957 "recv_buf_size": 2097152, 00:05:49.957 "send_buf_size": 2097152, 00:05:49.957 "enable_recv_pipe": true, 00:05:49.957 "enable_quickack": false, 00:05:49.957 "enable_placement_id": 0, 00:05:49.957 "enable_zerocopy_send_server": true, 00:05:49.957 "enable_zerocopy_send_client": false, 00:05:49.957 "zerocopy_threshold": 0, 00:05:49.957 "tls_version": 0, 00:05:49.957 "enable_ktls": false 00:05:49.957 } 00:05:49.957 } 00:05:49.957 ] 00:05:49.957 }, 00:05:49.957 { 00:05:49.957 "subsystem": "vmd", 00:05:49.957 "config": [] 00:05:49.957 }, 00:05:49.957 { 00:05:49.957 "subsystem": "accel", 00:05:49.957 "config": [ 00:05:49.957 { 00:05:49.957 "method": "accel_set_options", 00:05:49.957 "params": { 00:05:49.957 "small_cache_size": 128, 00:05:49.957 "large_cache_size": 16, 00:05:49.957 "task_count": 2048, 00:05:49.957 "sequence_count": 2048, 00:05:49.957 "buf_count": 2048 00:05:49.957 } 00:05:49.957 } 00:05:49.957 ] 00:05:49.957 }, 00:05:49.957 { 00:05:49.957 "subsystem": "bdev", 00:05:49.957 "config": [ 00:05:49.957 { 00:05:49.957 "method": "bdev_set_options", 00:05:49.957 "params": { 00:05:49.957 "bdev_io_pool_size": 65535, 00:05:49.957 "bdev_io_cache_size": 256, 00:05:49.957 "bdev_auto_examine": true, 00:05:49.957 "iobuf_small_cache_size": 128, 00:05:49.957 "iobuf_large_cache_size": 16 00:05:49.957 } 00:05:49.957 }, 00:05:49.957 { 00:05:49.957 "method": "bdev_raid_set_options", 00:05:49.957 "params": { 00:05:49.957 "process_window_size_kb": 1024, 00:05:49.957 "process_max_bandwidth_mb_sec": 0 00:05:49.957 } 00:05:49.957 }, 00:05:49.957 { 00:05:49.957 "method": "bdev_iscsi_set_options", 00:05:49.957 "params": { 00:05:49.957 "timeout_sec": 30 00:05:49.957 } 00:05:49.957 }, 00:05:49.957 { 00:05:49.957 "method": "bdev_nvme_set_options", 00:05:49.957 "params": { 00:05:49.957 "action_on_timeout": "none", 00:05:49.957 "timeout_us": 0, 00:05:49.957 "timeout_admin_us": 0, 00:05:49.957 "keep_alive_timeout_ms": 10000, 00:05:49.957 
"arbitration_burst": 0, 00:05:49.957 "low_priority_weight": 0, 00:05:49.957 "medium_priority_weight": 0, 00:05:49.957 "high_priority_weight": 0, 00:05:49.957 "nvme_adminq_poll_period_us": 10000, 00:05:49.957 "nvme_ioq_poll_period_us": 0, 00:05:49.957 "io_queue_requests": 0, 00:05:49.957 "delay_cmd_submit": true, 00:05:49.957 "transport_retry_count": 4, 00:05:49.957 "bdev_retry_count": 3, 00:05:49.957 "transport_ack_timeout": 0, 00:05:49.957 "ctrlr_loss_timeout_sec": 0, 00:05:49.957 "reconnect_delay_sec": 0, 00:05:49.957 "fast_io_fail_timeout_sec": 0, 00:05:49.957 "disable_auto_failback": false, 00:05:49.957 "generate_uuids": false, 00:05:49.957 "transport_tos": 0, 00:05:49.957 "nvme_error_stat": false, 00:05:49.957 "rdma_srq_size": 0, 00:05:49.957 "io_path_stat": false, 00:05:49.957 "allow_accel_sequence": false, 00:05:49.957 "rdma_max_cq_size": 0, 00:05:49.957 "rdma_cm_event_timeout_ms": 0, 00:05:49.957 "dhchap_digests": [ 00:05:49.957 "sha256", 00:05:49.957 "sha384", 00:05:49.957 "sha512" 00:05:49.957 ], 00:05:49.957 "dhchap_dhgroups": [ 00:05:49.957 "null", 00:05:49.957 "ffdhe2048", 00:05:49.957 "ffdhe3072", 00:05:49.957 "ffdhe4096", 00:05:49.957 "ffdhe6144", 00:05:49.957 "ffdhe8192" 00:05:49.957 ] 00:05:49.957 } 00:05:49.957 }, 00:05:49.957 { 00:05:49.957 "method": "bdev_nvme_set_hotplug", 00:05:49.957 "params": { 00:05:49.957 "period_us": 100000, 00:05:49.957 "enable": false 00:05:49.957 } 00:05:49.957 }, 00:05:49.957 { 00:05:49.957 "method": "bdev_wait_for_examine" 00:05:49.957 } 00:05:49.957 ] 00:05:49.957 }, 00:05:49.957 { 00:05:49.957 "subsystem": "scsi", 00:05:49.957 "config": null 00:05:49.957 }, 00:05:49.957 { 00:05:49.957 "subsystem": "scheduler", 00:05:49.957 "config": [ 00:05:49.957 { 00:05:49.957 "method": "framework_set_scheduler", 00:05:49.957 "params": { 00:05:49.957 "name": "static" 00:05:49.957 } 00:05:49.957 } 00:05:49.957 ] 00:05:49.957 }, 00:05:49.957 { 00:05:49.957 "subsystem": "vhost_scsi", 00:05:49.957 "config": [] 00:05:49.957 }, 
00:05:49.957 { 00:05:49.957 "subsystem": "vhost_blk", 00:05:49.957 "config": [] 00:05:49.957 }, 00:05:49.957 { 00:05:49.957 "subsystem": "ublk", 00:05:49.957 "config": [] 00:05:49.957 }, 00:05:49.957 { 00:05:49.957 "subsystem": "nbd", 00:05:49.957 "config": [] 00:05:49.957 }, 00:05:49.957 { 00:05:49.957 "subsystem": "nvmf", 00:05:49.957 "config": [ 00:05:49.957 { 00:05:49.957 "method": "nvmf_set_config", 00:05:49.957 "params": { 00:05:49.957 "discovery_filter": "match_any", 00:05:49.957 "admin_cmd_passthru": { 00:05:49.957 "identify_ctrlr": false 00:05:49.957 } 00:05:49.957 } 00:05:49.957 }, 00:05:49.957 { 00:05:49.957 "method": "nvmf_set_max_subsystems", 00:05:49.957 "params": { 00:05:49.957 "max_subsystems": 1024 00:05:49.957 } 00:05:49.957 }, 00:05:49.957 { 00:05:49.957 "method": "nvmf_set_crdt", 00:05:49.957 "params": { 00:05:49.957 "crdt1": 0, 00:05:49.957 "crdt2": 0, 00:05:49.957 "crdt3": 0 00:05:49.957 } 00:05:49.957 }, 00:05:49.957 { 00:05:49.957 "method": "nvmf_create_transport", 00:05:49.957 "params": { 00:05:49.957 "trtype": "TCP", 00:05:49.957 "max_queue_depth": 128, 00:05:49.957 "max_io_qpairs_per_ctrlr": 127, 00:05:49.957 "in_capsule_data_size": 4096, 00:05:49.957 "max_io_size": 131072, 00:05:49.957 "io_unit_size": 131072, 00:05:49.957 "max_aq_depth": 128, 00:05:49.957 "num_shared_buffers": 511, 00:05:49.957 "buf_cache_size": 4294967295, 00:05:49.957 "dif_insert_or_strip": false, 00:05:49.957 "zcopy": false, 00:05:49.957 "c2h_success": true, 00:05:49.957 "sock_priority": 0, 00:05:49.957 "abort_timeout_sec": 1, 00:05:49.957 "ack_timeout": 0, 00:05:49.957 "data_wr_pool_size": 0 00:05:49.957 } 00:05:49.957 } 00:05:49.957 ] 00:05:49.957 }, 00:05:49.957 { 00:05:49.957 "subsystem": "iscsi", 00:05:49.957 "config": [ 00:05:49.957 { 00:05:49.957 "method": "iscsi_set_options", 00:05:49.957 "params": { 00:05:49.957 "node_base": "iqn.2016-06.io.spdk", 00:05:49.957 "max_sessions": 128, 00:05:49.957 "max_connections_per_session": 2, 00:05:49.957 "max_queue_depth": 
64, 00:05:49.957 "default_time2wait": 2, 00:05:49.957 "default_time2retain": 20, 00:05:49.957 "first_burst_length": 8192, 00:05:49.957 "immediate_data": true, 00:05:49.957 "allow_duplicated_isid": false, 00:05:49.957 "error_recovery_level": 0, 00:05:49.957 "nop_timeout": 60, 00:05:49.957 "nop_in_interval": 30, 00:05:49.957 "disable_chap": false, 00:05:49.957 "require_chap": false, 00:05:49.957 "mutual_chap": false, 00:05:49.957 "chap_group": 0, 00:05:49.957 "max_large_datain_per_connection": 64, 00:05:49.957 "max_r2t_per_connection": 4, 00:05:49.957 "pdu_pool_size": 36864, 00:05:49.957 "immediate_data_pool_size": 16384, 00:05:49.957 "data_out_pool_size": 2048 00:05:49.957 } 00:05:49.957 } 00:05:49.957 ] 00:05:49.957 } 00:05:49.957 ] 00:05:49.957 } 00:05:49.958 04:56:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:49.958 04:56:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 71031 00:05:49.958 04:56:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 71031 ']' 00:05:49.958 04:56:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 71031 00:05:49.958 04:56:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:49.958 04:56:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:49.958 04:56:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71031 00:05:49.958 killing process with pid 71031 00:05:49.958 04:56:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:49.958 04:56:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:49.958 04:56:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71031' 00:05:49.958 04:56:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 71031 00:05:49.958 
04:56:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 71031 00:05:50.217 04:56:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=71053 00:05:50.217 04:56:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:50.217 04:56:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:55.514 04:56:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 71053 00:05:55.514 04:56:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 71053 ']' 00:05:55.514 04:56:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 71053 00:05:55.514 04:56:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:55.514 04:56:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:55.514 04:56:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71053 00:05:55.514 killing process with pid 71053 00:05:55.514 04:56:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:55.514 04:56:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:55.514 04:56:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71053' 00:05:55.514 04:56:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 71053 00:05:55.515 04:56:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 71053 00:05:55.774 04:56:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:55.774 04:56:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:55.774 
************************************ 00:05:55.774 END TEST skip_rpc_with_json 00:05:55.774 ************************************ 00:05:55.774 00:05:55.774 real 0m6.911s 00:05:55.774 user 0m6.634s 00:05:55.774 sys 0m0.628s 00:05:55.774 04:56:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.774 04:56:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:55.774 04:56:55 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:55.774 04:56:55 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:55.774 04:56:55 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.774 04:56:55 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.774 04:56:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.774 ************************************ 00:05:55.774 START TEST skip_rpc_with_delay 00:05:55.774 ************************************ 00:05:55.774 04:56:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:55.774 04:56:55 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:55.774 04:56:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:55.774 04:56:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:55.774 04:56:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:55.774 04:56:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.774 04:56:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:55.774 
04:56:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.774 04:56:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:55.774 04:56:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.774 04:56:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:55.774 04:56:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:55.774 04:56:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:55.774 [2024-07-23 04:56:55.906940] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:55.774 [2024-07-23 04:56:55.907057] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:55.774 ************************************ 00:05:55.774 END TEST skip_rpc_with_delay 00:05:55.774 ************************************ 00:05:55.774 04:56:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:55.774 04:56:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:55.774 04:56:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:55.774 04:56:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:55.774 00:05:55.774 real 0m0.113s 00:05:55.774 user 0m0.077s 00:05:55.774 sys 0m0.034s 00:05:55.774 04:56:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.774 04:56:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:55.774 04:56:55 skip_rpc -- 
common/autotest_common.sh@1142 -- # return 0 00:05:55.774 04:56:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:55.774 04:56:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:55.774 04:56:55 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:55.774 04:56:55 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.774 04:56:55 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.774 04:56:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.775 ************************************ 00:05:55.775 START TEST exit_on_failed_rpc_init 00:05:55.775 ************************************ 00:05:55.775 04:56:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:55.775 04:56:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=71168 00:05:55.775 04:56:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 71168 00:05:55.775 04:56:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.775 04:56:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 71168 ']' 00:05:55.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.775 04:56:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.775 04:56:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.775 04:56:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:55.775 04:56:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.775 04:56:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:56.034 [2024-07-23 04:56:56.062197] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:05:56.034 [2024-07-23 04:56:56.062492] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71168 ] 00:05:56.034 [2024-07-23 04:56:56.200282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.294 [2024-07-23 04:56:56.268750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.862 04:56:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.862 04:56:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:56.862 04:56:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:56.862 04:56:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:56.862 04:56:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:56.862 04:56:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:56.862 04:56:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:56.862 04:56:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:56.862 04:56:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:56.862 04:56:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:56.862 04:56:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:56.862 04:56:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:56.862 04:56:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:56.862 04:56:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:56.862 04:56:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:57.122 [2024-07-23 04:56:57.109635] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:05:57.122 [2024-07-23 04:56:57.109722] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71186 ] 00:05:57.122 [2024-07-23 04:56:57.248922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.382 [2024-07-23 04:56:57.343828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.382 [2024-07-23 04:56:57.343934] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:57.382 [2024-07-23 04:56:57.343953] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:57.382 [2024-07-23 04:56:57.343964] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:57.382 04:56:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:57.382 04:56:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:57.382 04:56:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:57.382 04:56:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:57.382 04:56:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:57.382 04:56:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:57.382 04:56:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:57.382 04:56:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 71168 00:05:57.382 04:56:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 71168 ']' 00:05:57.382 04:56:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 71168 00:05:57.382 04:56:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:57.382 04:56:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:57.382 04:56:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71168 00:05:57.382 killing process with pid 71168 00:05:57.382 04:56:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:57.382 04:56:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:57.382 04:56:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # 
echo 'killing process with pid 71168' 00:05:57.382 04:56:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 71168 00:05:57.382 04:56:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 71168 00:05:57.642 ************************************ 00:05:57.642 END TEST exit_on_failed_rpc_init 00:05:57.642 ************************************ 00:05:57.642 00:05:57.642 real 0m1.834s 00:05:57.642 user 0m2.146s 00:05:57.642 sys 0m0.434s 00:05:57.642 04:56:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.642 04:56:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:57.642 04:56:57 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:57.642 04:56:57 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:57.642 00:05:57.642 real 0m14.524s 00:05:57.642 user 0m13.981s 00:05:57.642 sys 0m1.539s 00:05:57.642 04:56:57 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.642 04:56:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.642 ************************************ 00:05:57.642 END TEST skip_rpc 00:05:57.642 ************************************ 00:05:57.902 04:56:57 -- common/autotest_common.sh@1142 -- # return 0 00:05:57.902 04:56:57 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:57.902 04:56:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.902 04:56:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.902 04:56:57 -- common/autotest_common.sh@10 -- # set +x 00:05:57.902 ************************************ 00:05:57.902 START TEST rpc_client 00:05:57.902 ************************************ 00:05:57.902 04:56:57 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:57.902 * Looking for test storage... 
00:05:57.902 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:57.902 04:56:57 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:57.902 OK 00:05:57.902 04:56:58 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:57.902 00:05:57.902 real 0m0.107s 00:05:57.902 user 0m0.050s 00:05:57.902 sys 0m0.061s 00:05:57.902 ************************************ 00:05:57.902 END TEST rpc_client 00:05:57.902 ************************************ 00:05:57.902 04:56:58 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.902 04:56:58 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:57.902 04:56:58 -- common/autotest_common.sh@1142 -- # return 0 00:05:57.902 04:56:58 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:57.902 04:56:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.902 04:56:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.902 04:56:58 -- common/autotest_common.sh@10 -- # set +x 00:05:57.902 ************************************ 00:05:57.902 START TEST json_config 00:05:57.902 ************************************ 00:05:57.902 04:56:58 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:57.902 04:56:58 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:57.902 04:56:58 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:57.902 04:56:58 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:57.902 04:56:58 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:57.902 04:56:58 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:57.902 04:56:58 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:57.902 04:56:58 json_config -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:05:57.902 04:56:58 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:57.902 04:56:58 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:57.902 04:56:58 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:58.162 04:56:58 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:58.162 04:56:58 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:58.162 04:56:58 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b62462d-2eeb-436d-9516-51c2e436d86a 00:05:58.162 04:56:58 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8b62462d-2eeb-436d-9516-51c2e436d86a 00:05:58.162 04:56:58 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:58.162 04:56:58 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:58.162 04:56:58 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:58.162 04:56:58 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:58.162 04:56:58 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:58.162 04:56:58 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:58.162 04:56:58 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:58.162 04:56:58 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:58.162 04:56:58 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.162 04:56:58 json_config -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.163 04:56:58 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.163 04:56:58 json_config -- paths/export.sh@5 -- # export PATH 00:05:58.163 04:56:58 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.163 04:56:58 json_config -- nvmf/common.sh@47 -- # : 0 00:05:58.163 04:56:58 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:58.163 04:56:58 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:58.163 04:56:58 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:58.163 04:56:58 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:58.163 04:56:58 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:58.163 04:56:58 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 
00:05:58.163 04:56:58 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:58.163 04:56:58 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:58.163 04:56:58 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:58.163 04:56:58 json_config -- json_config/json_config.sh@11 -- # [[ 1 -eq 1 ]] 00:05:58.163 04:56:58 json_config -- json_config/json_config.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:05:58.163 04:56:58 json_config -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:05:58.163 04:56:58 json_config -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:05:58.163 04:56:58 json_config -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:05:58.163 04:56:58 json_config -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:05:58.163 04:56:58 json_config -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:05:58.163 04:56:58 json_config -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:05:58.163 04:56:58 json_config -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:05:58.163 04:56:58 json_config -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:05:58.163 04:56:58 json_config -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:05:58.163 04:56:58 json_config -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:05:58.163 04:56:58 json_config -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:05:58.163 04:56:58 json_config -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:05:58.163 04:56:58 json_config -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:05:58.163 04:56:58 json_config -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:05:58.163 04:56:58 json_config -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:05:58.163 04:56:58 json_config -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:05:58.163 04:56:58 json_config -- 
iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:05:58.163 04:56:58 json_config -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:05:58.163 04:56:58 json_config -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:05:58.163 04:56:58 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:58.163 04:56:58 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:58.163 04:56:58 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:58.163 04:56:58 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:58.163 04:56:58 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:58.163 04:56:58 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:58.163 04:56:58 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:58.163 04:56:58 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:58.163 04:56:58 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:58.163 04:56:58 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:58.163 04:56:58 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:58.163 04:56:58 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:58.163 04:56:58 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:58.163 04:56:58 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:05:58.163 INFO: JSON 
configuration test init 00:05:58.163 04:56:58 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:05:58.163 04:56:58 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:05:58.163 04:56:58 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:58.163 04:56:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.163 04:56:58 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:05:58.163 04:56:58 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:58.163 04:56:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.163 04:56:58 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:05:58.163 04:56:58 json_config -- json_config/common.sh@9 -- # local app=target 00:05:58.163 04:56:58 json_config -- json_config/common.sh@10 -- # shift 00:05:58.163 04:56:58 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:58.163 04:56:58 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:58.163 04:56:58 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:58.163 04:56:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:58.163 04:56:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:58.163 04:56:58 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=71304 00:05:58.163 Waiting for target to run... 00:05:58.163 04:56:58 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:58.163 04:56:58 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:05:58.163 04:56:58 json_config -- json_config/common.sh@25 -- # waitforlisten 71304 /var/tmp/spdk_tgt.sock 00:05:58.163 04:56:58 json_config -- common/autotest_common.sh@829 -- # '[' -z 71304 ']' 00:05:58.163 04:56:58 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:58.163 04:56:58 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.163 04:56:58 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:58.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:58.163 04:56:58 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.163 04:56:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.163 [2024-07-23 04:56:58.265761] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:05:58.163 [2024-07-23 04:56:58.266158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71304 ] 00:05:58.731 [2024-07-23 04:56:58.727690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.731 [2024-07-23 04:56:58.808442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.299 00:05:59.299 04:56:59 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.299 04:56:59 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:59.299 04:56:59 json_config -- json_config/common.sh@26 -- # echo '' 00:05:59.299 04:56:59 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:05:59.299 04:56:59 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:05:59.299 04:56:59 json_config -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:05:59.299 04:56:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.299 04:56:59 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:05:59.299 04:56:59 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:05:59.299 04:56:59 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:59.299 04:56:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.300 04:56:59 json_config -- json_config/json_config.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:59.300 04:56:59 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:05:59.300 04:56:59 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:59.559 04:56:59 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:05:59.559 04:56:59 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:59.559 04:56:59 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:59.559 04:56:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.559 04:56:59 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:59.559 04:56:59 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:59.559 04:56:59 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:59.559 04:56:59 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:59.559 04:56:59 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:59.559 04:56:59 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:00.128 04:57:00 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 
'bdev_unregister') 00:06:00.128 04:57:00 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:00.128 04:57:00 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:06:00.128 04:57:00 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:06:00.128 04:57:00 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:06:00.128 04:57:00 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:06:00.128 04:57:00 json_config -- json_config/json_config.sh@51 -- # sort 00:06:00.128 04:57:00 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:06:00.128 04:57:00 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:06:00.128 04:57:00 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:06:00.128 04:57:00 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:00.128 04:57:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:00.128 04:57:00 json_config -- json_config/json_config.sh@59 -- # return 0 00:06:00.128 04:57:00 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:00.128 04:57:00 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:00.128 04:57:00 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:06:00.128 04:57:00 json_config -- json_config/json_config.sh@291 -- # create_iscsi_subsystem_config 00:06:00.128 04:57:00 json_config -- json_config/json_config.sh@225 -- # timing_enter create_iscsi_subsystem_config 00:06:00.128 04:57:00 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:00.128 04:57:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:00.128 04:57:00 json_config -- json_config/json_config.sh@226 -- # tgt_rpc bdev_malloc_create 64 1024 --name MallocForIscsi0 00:06:00.128 04:57:00 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock bdev_malloc_create 64 1024 --name MallocForIscsi0 00:06:00.388 MallocForIscsi0 00:06:00.388 04:57:00 json_config -- json_config/json_config.sh@227 -- # tgt_rpc iscsi_create_portal_group 1 127.0.0.1:3260 00:06:00.388 04:57:00 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock iscsi_create_portal_group 1 127.0.0.1:3260 00:06:00.646 04:57:00 json_config -- json_config/json_config.sh@228 -- # tgt_rpc iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:06:00.647 04:57:00 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:06:00.905 04:57:00 json_config -- json_config/json_config.sh@229 -- # tgt_rpc iscsi_create_target_node Target3 Target3_alias MallocForIscsi0:0 1:2 64 -d 00:06:00.905 04:57:00 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock iscsi_create_target_node Target3 Target3_alias MallocForIscsi0:0 1:2 64 -d 00:06:00.905 04:57:01 json_config -- json_config/json_config.sh@230 -- # timing_exit create_iscsi_subsystem_config 00:06:00.905 04:57:01 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:00.905 04:57:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.165 04:57:01 json_config -- json_config/json_config.sh@294 -- # [[ 0 -eq 1 ]] 00:06:01.165 04:57:01 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:06:01.165 04:57:01 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:01.165 04:57:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.165 04:57:01 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:06:01.165 04:57:01 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:01.165 04:57:01 json_config -- 
json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:01.424 MallocBdevForConfigChangeCheck 00:06:01.424 04:57:01 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:06:01.424 04:57:01 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:01.424 04:57:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.424 04:57:01 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:06:01.424 04:57:01 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:01.683 INFO: shutting down applications... 00:06:01.683 04:57:01 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:06:01.683 04:57:01 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:06:01.683 04:57:01 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:06:01.683 04:57:01 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:06:01.683 04:57:01 json_config -- json_config/json_config.sh@337 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:02.252 Calling clear_iscsi_subsystem 00:06:02.252 Calling clear_nvmf_subsystem 00:06:02.252 Calling clear_nbd_subsystem 00:06:02.252 Calling clear_ublk_subsystem 00:06:02.252 Calling clear_vhost_blk_subsystem 00:06:02.252 Calling clear_vhost_scsi_subsystem 00:06:02.252 Calling clear_bdev_subsystem 00:06:02.252 04:57:02 json_config -- json_config/json_config.sh@341 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:02.252 04:57:02 json_config -- json_config/json_config.sh@347 -- # count=100 00:06:02.252 04:57:02 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:06:02.252 04:57:02 json_config 
-- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:02.252 04:57:02 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:02.252 04:57:02 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:02.545 04:57:02 json_config -- json_config/json_config.sh@349 -- # break 00:06:02.545 04:57:02 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:06:02.545 04:57:02 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:06:02.545 04:57:02 json_config -- json_config/common.sh@31 -- # local app=target 00:06:02.545 04:57:02 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:02.545 04:57:02 json_config -- json_config/common.sh@35 -- # [[ -n 71304 ]] 00:06:02.545 04:57:02 json_config -- json_config/common.sh@38 -- # kill -SIGINT 71304 00:06:02.545 04:57:02 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:02.545 04:57:02 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:02.545 04:57:02 json_config -- json_config/common.sh@41 -- # kill -0 71304 00:06:02.545 04:57:02 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:03.113 04:57:03 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:03.113 04:57:03 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:03.113 04:57:03 json_config -- json_config/common.sh@41 -- # kill -0 71304 00:06:03.113 04:57:03 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:03.113 04:57:03 json_config -- json_config/common.sh@43 -- # break 00:06:03.113 SPDK target shutdown done 00:06:03.113 04:57:03 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:03.113 04:57:03 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:03.113 
INFO: relaunching applications... 00:06:03.113 04:57:03 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:06:03.113 04:57:03 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:03.113 04:57:03 json_config -- json_config/common.sh@9 -- # local app=target 00:06:03.113 04:57:03 json_config -- json_config/common.sh@10 -- # shift 00:06:03.113 04:57:03 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:03.113 04:57:03 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:03.113 04:57:03 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:03.113 04:57:03 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:03.113 04:57:03 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:03.113 Waiting for target to run... 00:06:03.113 04:57:03 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=71482 00:06:03.113 04:57:03 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:03.113 04:57:03 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:03.113 04:57:03 json_config -- json_config/common.sh@25 -- # waitforlisten 71482 /var/tmp/spdk_tgt.sock 00:06:03.113 04:57:03 json_config -- common/autotest_common.sh@829 -- # '[' -z 71482 ']' 00:06:03.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:03.114 04:57:03 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:03.114 04:57:03 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.114 04:57:03 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:03.114 04:57:03 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.114 04:57:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.114 [2024-07-23 04:57:03.190127] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:06:03.114 [2024-07-23 04:57:03.190254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71482 ] 00:06:03.680 [2024-07-23 04:57:03.611766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.680 [2024-07-23 04:57:03.700106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.940 04:57:04 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:03.940 00:06:03.940 INFO: Checking if target configuration is the same... 00:06:03.940 04:57:04 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:03.940 04:57:04 json_config -- json_config/common.sh@26 -- # echo '' 00:06:03.940 04:57:04 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:06:03.940 04:57:04 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 
00:06:03.940 04:57:04 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:06:03.940 04:57:04 json_config -- json_config/json_config.sh@382 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:03.940 04:57:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:03.940 + '[' 2 -ne 2 ']' 00:06:03.940 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:03.940 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:03.940 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:03.940 +++ basename /dev/fd/62 00:06:03.940 ++ mktemp /tmp/62.XXX 00:06:03.940 + tmp_file_1=/tmp/62.GBI 00:06:03.940 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:03.940 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:03.940 + tmp_file_2=/tmp/spdk_tgt_config.json.9Mf 00:06:03.940 + ret=0 00:06:03.940 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:04.507 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:04.507 + diff -u /tmp/62.GBI /tmp/spdk_tgt_config.json.9Mf 00:06:04.507 INFO: JSON config files are the same 00:06:04.507 + echo 'INFO: JSON config files are the same' 00:06:04.507 + rm /tmp/62.GBI /tmp/spdk_tgt_config.json.9Mf 00:06:04.507 + exit 0 00:06:04.508 INFO: changing configuration and checking if this can be detected... 00:06:04.508 04:57:04 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:06:04.508 04:57:04 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
00:06:04.508 04:57:04 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:04.508 04:57:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:04.767 04:57:04 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:06:04.767 04:57:04 json_config -- json_config/json_config.sh@391 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:04.767 04:57:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:04.767 + '[' 2 -ne 2 ']' 00:06:04.767 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:04.767 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:04.767 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:04.767 +++ basename /dev/fd/62 00:06:04.767 ++ mktemp /tmp/62.XXX 00:06:04.767 + tmp_file_1=/tmp/62.Mu6 00:06:04.767 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:04.767 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:04.767 + tmp_file_2=/tmp/spdk_tgt_config.json.wz2 00:06:04.767 + ret=0 00:06:04.767 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:05.026 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:05.026 + diff -u /tmp/62.Mu6 /tmp/spdk_tgt_config.json.wz2 00:06:05.026 + ret=1 00:06:05.026 + echo '=== Start of file: /tmp/62.Mu6 ===' 00:06:05.026 + cat /tmp/62.Mu6 00:06:05.026 + echo '=== End of file: /tmp/62.Mu6 ===' 00:06:05.026 + echo '' 00:06:05.026 + echo '=== Start of file: /tmp/spdk_tgt_config.json.wz2 ===' 00:06:05.026 + cat /tmp/spdk_tgt_config.json.wz2 00:06:05.026 + echo '=== End of file: /tmp/spdk_tgt_config.json.wz2 ===' 00:06:05.026 + echo '' 00:06:05.026 + rm /tmp/62.Mu6 
/tmp/spdk_tgt_config.json.wz2 00:06:05.026 + exit 1 00:06:05.026 INFO: configuration change detected. 00:06:05.026 04:57:05 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:06:05.026 04:57:05 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:06:05.026 04:57:05 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:06:05.026 04:57:05 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:05.026 04:57:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.026 04:57:05 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:06:05.026 04:57:05 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:06:05.026 04:57:05 json_config -- json_config/json_config.sh@321 -- # [[ -n 71482 ]] 00:06:05.026 04:57:05 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:06:05.026 04:57:05 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:06:05.026 04:57:05 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:05.026 04:57:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.026 04:57:05 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:06:05.026 04:57:05 json_config -- json_config/json_config.sh@197 -- # uname -s 00:06:05.026 04:57:05 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:06:05.026 04:57:05 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:06:05.026 04:57:05 json_config -- json_config/json_config.sh@201 -- # [[ 1 -eq 1 ]] 00:06:05.026 04:57:05 json_config -- json_config/json_config.sh@202 -- # rbd_cleanup 00:06:05.026 04:57:05 json_config -- common/autotest_common.sh@1031 -- # hash ceph 00:06:05.026 04:57:05 json_config -- common/autotest_common.sh@1032 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:06:05.026 + base_dir=/var/tmp/ceph 
00:06:05.026 + image=/var/tmp/ceph/ceph_raw.img 00:06:05.026 + dev=/dev/loop200 00:06:05.026 + pkill -9 ceph 00:06:05.285 + sleep 3 00:06:08.571 + umount /dev/loop200p2 00:06:08.571 umount: /dev/loop200p2: no mount point specified. 00:06:08.571 + losetup -d /dev/loop200 00:06:08.571 losetup: /dev/loop200: failed to use device: No such device 00:06:08.571 + rm -rf /var/tmp/ceph 00:06:08.571 04:57:08 json_config -- common/autotest_common.sh@1033 -- # rm -f /var/tmp/ceph_raw.img 00:06:08.571 04:57:08 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:06:08.571 04:57:08 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:08.571 04:57:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.571 04:57:08 json_config -- json_config/json_config.sh@327 -- # killprocess 71482 00:06:08.571 04:57:08 json_config -- common/autotest_common.sh@948 -- # '[' -z 71482 ']' 00:06:08.571 04:57:08 json_config -- common/autotest_common.sh@952 -- # kill -0 71482 00:06:08.571 04:57:08 json_config -- common/autotest_common.sh@953 -- # uname 00:06:08.571 04:57:08 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:08.571 04:57:08 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71482 00:06:08.571 killing process with pid 71482 00:06:08.571 04:57:08 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:08.571 04:57:08 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:08.571 04:57:08 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71482' 00:06:08.571 04:57:08 json_config -- common/autotest_common.sh@967 -- # kill 71482 00:06:08.571 04:57:08 json_config -- common/autotest_common.sh@972 -- # wait 71482 00:06:08.571 04:57:08 json_config -- json_config/json_config.sh@330 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 
00:06:08.571 04:57:08 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:06:08.571 04:57:08 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:08.571 04:57:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.571 04:57:08 json_config -- json_config/json_config.sh@332 -- # return 0 00:06:08.571 04:57:08 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:06:08.571 INFO: Success 00:06:08.571 ************************************ 00:06:08.571 END TEST json_config 00:06:08.571 ************************************ 00:06:08.571 00:06:08.571 real 0m10.637s 00:06:08.571 user 0m13.507s 00:06:08.571 sys 0m1.731s 00:06:08.571 04:57:08 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.571 04:57:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.571 04:57:08 -- common/autotest_common.sh@1142 -- # return 0 00:06:08.571 04:57:08 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:08.571 04:57:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:08.571 04:57:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.571 04:57:08 -- common/autotest_common.sh@10 -- # set +x 00:06:08.571 ************************************ 00:06:08.571 START TEST json_config_extra_key 00:06:08.571 ************************************ 00:06:08.571 04:57:08 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:08.831 04:57:08 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:08.831 04:57:08 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:08.831 04:57:08 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:08.831 04:57:08 json_config_extra_key -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:06:08.831 04:57:08 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:08.831 04:57:08 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:08.831 04:57:08 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:08.831 04:57:08 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:08.831 04:57:08 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:08.831 04:57:08 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:08.831 04:57:08 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:08.831 04:57:08 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:08.831 04:57:08 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b62462d-2eeb-436d-9516-51c2e436d86a 00:06:08.831 04:57:08 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8b62462d-2eeb-436d-9516-51c2e436d86a 00:06:08.831 04:57:08 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:08.831 04:57:08 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:08.831 04:57:08 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:08.831 04:57:08 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:08.831 04:57:08 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:08.831 04:57:08 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:08.831 04:57:08 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:08.831 04:57:08 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:08.831 04:57:08 json_config_extra_key -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.831 04:57:08 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.831 04:57:08 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.831 04:57:08 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:08.831 04:57:08 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.831 04:57:08 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:08.831 04:57:08 
json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:08.831 04:57:08 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:08.831 04:57:08 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:08.831 04:57:08 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:08.831 04:57:08 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:08.831 04:57:08 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:08.831 04:57:08 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:08.831 04:57:08 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:08.831 04:57:08 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:08.831 04:57:08 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:08.831 04:57:08 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:08.831 04:57:08 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:08.831 04:57:08 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:08.831 INFO: launching applications... 
00:06:08.831 04:57:08 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:08.831 04:57:08 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:08.831 04:57:08 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:08.831 04:57:08 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:08.831 04:57:08 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:08.831 04:57:08 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:08.831 04:57:08 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:08.831 04:57:08 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:08.831 04:57:08 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:08.831 04:57:08 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:08.831 04:57:08 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:08.831 04:57:08 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:08.831 Waiting for target to run... 00:06:08.831 04:57:08 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:08.831 04:57:08 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:08.831 04:57:08 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=71666 00:06:08.831 04:57:08 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:06:08.831 04:57:08 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 71666 /var/tmp/spdk_tgt.sock 00:06:08.831 04:57:08 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 71666 ']' 00:06:08.831 04:57:08 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:08.831 04:57:08 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:08.831 04:57:08 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.831 04:57:08 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:08.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:08.831 04:57:08 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.831 04:57:08 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:08.831 [2024-07-23 04:57:08.921918] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:06:08.831 [2024-07-23 04:57:08.922349] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71666 ] 00:06:09.398 [2024-07-23 04:57:09.377662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.398 [2024-07-23 04:57:09.469671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.687 00:06:09.687 INFO: shutting down applications... 
00:06:09.687 04:57:09 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.687 04:57:09 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:09.687 04:57:09 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:09.687 04:57:09 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:09.687 04:57:09 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:09.687 04:57:09 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:09.687 04:57:09 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:09.687 04:57:09 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 71666 ]] 00:06:09.687 04:57:09 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 71666 00:06:09.687 04:57:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:09.687 04:57:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:09.687 04:57:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71666 00:06:09.687 04:57:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:10.254 04:57:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:10.254 04:57:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:10.254 04:57:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71666 00:06:10.254 04:57:10 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:10.822 04:57:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:10.822 04:57:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:10.822 04:57:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71666 00:06:10.822 04:57:10 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:10.822 
04:57:10 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:10.822 SPDK target shutdown done 00:06:10.822 04:57:10 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:10.822 04:57:10 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:10.822 Success 00:06:10.822 04:57:10 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:10.822 00:06:10.822 real 0m2.098s 00:06:10.822 user 0m1.528s 00:06:10.822 sys 0m0.478s 00:06:10.822 ************************************ 00:06:10.822 END TEST json_config_extra_key 00:06:10.822 ************************************ 00:06:10.822 04:57:10 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.822 04:57:10 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:10.822 04:57:10 -- common/autotest_common.sh@1142 -- # return 0 00:06:10.822 04:57:10 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:10.822 04:57:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.822 04:57:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.822 04:57:10 -- common/autotest_common.sh@10 -- # set +x 00:06:10.822 ************************************ 00:06:10.822 START TEST alias_rpc 00:06:10.822 ************************************ 00:06:10.822 04:57:10 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:10.822 * Looking for test storage... 
00:06:10.822 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:10.822 04:57:10 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:10.822 04:57:10 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=71737 00:06:10.822 04:57:10 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:10.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.822 04:57:10 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 71737 00:06:10.822 04:57:10 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 71737 ']' 00:06:10.822 04:57:10 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.822 04:57:10 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.822 04:57:10 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.822 04:57:10 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.822 04:57:10 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.081 [2024-07-23 04:57:11.069882] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:06:11.081 [2024-07-23 04:57:11.069998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71737 ] 00:06:11.081 [2024-07-23 04:57:11.205492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.081 [2024-07-23 04:57:11.290277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.017 04:57:11 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.017 04:57:11 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:12.017 04:57:11 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:12.276 04:57:12 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 71737 00:06:12.276 04:57:12 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 71737 ']' 00:06:12.276 04:57:12 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 71737 00:06:12.276 04:57:12 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:12.276 04:57:12 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:12.276 04:57:12 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71737 00:06:12.276 killing process with pid 71737 00:06:12.276 04:57:12 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:12.276 04:57:12 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:12.276 04:57:12 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71737' 00:06:12.276 04:57:12 alias_rpc -- common/autotest_common.sh@967 -- # kill 71737 00:06:12.276 04:57:12 alias_rpc -- common/autotest_common.sh@972 -- # wait 71737 00:06:12.843 ************************************ 00:06:12.843 END TEST alias_rpc 00:06:12.843 ************************************ 00:06:12.843 00:06:12.843 real 
0m1.912s 00:06:12.843 user 0m2.042s 00:06:12.843 sys 0m0.513s 00:06:12.843 04:57:12 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.843 04:57:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.843 04:57:12 -- common/autotest_common.sh@1142 -- # return 0 00:06:12.843 04:57:12 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:12.844 04:57:12 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:12.844 04:57:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:12.844 04:57:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.844 04:57:12 -- common/autotest_common.sh@10 -- # set +x 00:06:12.844 ************************************ 00:06:12.844 START TEST spdkcli_tcp 00:06:12.844 ************************************ 00:06:12.844 04:57:12 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:12.844 * Looking for test storage... 00:06:12.844 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:12.844 04:57:12 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:12.844 04:57:12 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:12.844 04:57:12 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:12.844 04:57:12 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:12.844 04:57:12 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:12.844 04:57:12 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:12.844 04:57:12 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:12.844 04:57:12 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:12.844 04:57:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:12.844 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.844 04:57:12 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=71813 00:06:12.844 04:57:12 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 71813 00:06:12.844 04:57:12 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:12.844 04:57:12 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 71813 ']' 00:06:12.844 04:57:12 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.844 04:57:12 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.844 04:57:12 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.844 04:57:12 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.844 04:57:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:12.844 [2024-07-23 04:57:13.040150] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:06:12.844 [2024-07-23 04:57:13.040272] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71813 ] 00:06:13.103 [2024-07-23 04:57:13.176475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:13.103 [2024-07-23 04:57:13.248647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.103 [2024-07-23 04:57:13.248670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.039 04:57:14 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.039 04:57:14 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:14.039 04:57:14 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=71830 00:06:14.039 04:57:14 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:14.039 04:57:14 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:14.299 [ 00:06:14.299 "bdev_malloc_delete", 00:06:14.299 "bdev_malloc_create", 00:06:14.299 "bdev_null_resize", 00:06:14.299 "bdev_null_delete", 00:06:14.299 "bdev_null_create", 00:06:14.299 "bdev_nvme_cuse_unregister", 00:06:14.299 "bdev_nvme_cuse_register", 00:06:14.299 "bdev_opal_new_user", 00:06:14.299 "bdev_opal_set_lock_state", 00:06:14.299 "bdev_opal_delete", 00:06:14.299 "bdev_opal_get_info", 00:06:14.299 "bdev_opal_create", 00:06:14.299 "bdev_nvme_opal_revert", 00:06:14.299 "bdev_nvme_opal_init", 00:06:14.299 "bdev_nvme_send_cmd", 00:06:14.299 "bdev_nvme_get_path_iostat", 00:06:14.299 "bdev_nvme_get_mdns_discovery_info", 00:06:14.299 "bdev_nvme_stop_mdns_discovery", 00:06:14.299 "bdev_nvme_start_mdns_discovery", 00:06:14.299 "bdev_nvme_set_multipath_policy", 00:06:14.299 "bdev_nvme_set_preferred_path", 00:06:14.299 
"bdev_nvme_get_io_paths", 00:06:14.299 "bdev_nvme_remove_error_injection", 00:06:14.299 "bdev_nvme_add_error_injection", 00:06:14.299 "bdev_nvme_get_discovery_info", 00:06:14.299 "bdev_nvme_stop_discovery", 00:06:14.299 "bdev_nvme_start_discovery", 00:06:14.299 "bdev_nvme_get_controller_health_info", 00:06:14.299 "bdev_nvme_disable_controller", 00:06:14.299 "bdev_nvme_enable_controller", 00:06:14.299 "bdev_nvme_reset_controller", 00:06:14.299 "bdev_nvme_get_transport_statistics", 00:06:14.299 "bdev_nvme_apply_firmware", 00:06:14.299 "bdev_nvme_detach_controller", 00:06:14.299 "bdev_nvme_get_controllers", 00:06:14.299 "bdev_nvme_attach_controller", 00:06:14.299 "bdev_nvme_set_hotplug", 00:06:14.299 "bdev_nvme_set_options", 00:06:14.299 "bdev_passthru_delete", 00:06:14.299 "bdev_passthru_create", 00:06:14.299 "bdev_lvol_set_parent_bdev", 00:06:14.299 "bdev_lvol_set_parent", 00:06:14.299 "bdev_lvol_check_shallow_copy", 00:06:14.299 "bdev_lvol_start_shallow_copy", 00:06:14.299 "bdev_lvol_grow_lvstore", 00:06:14.299 "bdev_lvol_get_lvols", 00:06:14.299 "bdev_lvol_get_lvstores", 00:06:14.299 "bdev_lvol_delete", 00:06:14.299 "bdev_lvol_set_read_only", 00:06:14.299 "bdev_lvol_resize", 00:06:14.299 "bdev_lvol_decouple_parent", 00:06:14.299 "bdev_lvol_inflate", 00:06:14.299 "bdev_lvol_rename", 00:06:14.299 "bdev_lvol_clone_bdev", 00:06:14.299 "bdev_lvol_clone", 00:06:14.299 "bdev_lvol_snapshot", 00:06:14.299 "bdev_lvol_create", 00:06:14.299 "bdev_lvol_delete_lvstore", 00:06:14.299 "bdev_lvol_rename_lvstore", 00:06:14.299 "bdev_lvol_create_lvstore", 00:06:14.299 "bdev_raid_set_options", 00:06:14.299 "bdev_raid_remove_base_bdev", 00:06:14.299 "bdev_raid_add_base_bdev", 00:06:14.299 "bdev_raid_delete", 00:06:14.299 "bdev_raid_create", 00:06:14.299 "bdev_raid_get_bdevs", 00:06:14.299 "bdev_error_inject_error", 00:06:14.299 "bdev_error_delete", 00:06:14.299 "bdev_error_create", 00:06:14.299 "bdev_split_delete", 00:06:14.299 "bdev_split_create", 00:06:14.299 "bdev_delay_delete", 
00:06:14.299 "bdev_delay_create", 00:06:14.299 "bdev_delay_update_latency", 00:06:14.299 "bdev_zone_block_delete", 00:06:14.299 "bdev_zone_block_create", 00:06:14.299 "blobfs_create", 00:06:14.299 "blobfs_detect", 00:06:14.299 "blobfs_set_cache_size", 00:06:14.299 "bdev_aio_delete", 00:06:14.299 "bdev_aio_rescan", 00:06:14.299 "bdev_aio_create", 00:06:14.299 "bdev_ftl_set_property", 00:06:14.299 "bdev_ftl_get_properties", 00:06:14.299 "bdev_ftl_get_stats", 00:06:14.299 "bdev_ftl_unmap", 00:06:14.299 "bdev_ftl_unload", 00:06:14.299 "bdev_ftl_delete", 00:06:14.299 "bdev_ftl_load", 00:06:14.299 "bdev_ftl_create", 00:06:14.299 "bdev_virtio_attach_controller", 00:06:14.299 "bdev_virtio_scsi_get_devices", 00:06:14.299 "bdev_virtio_detach_controller", 00:06:14.299 "bdev_virtio_blk_set_hotplug", 00:06:14.299 "bdev_iscsi_delete", 00:06:14.299 "bdev_iscsi_create", 00:06:14.299 "bdev_iscsi_set_options", 00:06:14.299 "bdev_rbd_get_clusters_info", 00:06:14.299 "bdev_rbd_unregister_cluster", 00:06:14.299 "bdev_rbd_register_cluster", 00:06:14.299 "bdev_rbd_resize", 00:06:14.299 "bdev_rbd_delete", 00:06:14.299 "bdev_rbd_create", 00:06:14.299 "accel_error_inject_error", 00:06:14.299 "ioat_scan_accel_module", 00:06:14.299 "dsa_scan_accel_module", 00:06:14.299 "iaa_scan_accel_module", 00:06:14.299 "keyring_file_remove_key", 00:06:14.299 "keyring_file_add_key", 00:06:14.299 "keyring_linux_set_options", 00:06:14.299 "iscsi_get_histogram", 00:06:14.299 "iscsi_enable_histogram", 00:06:14.299 "iscsi_set_options", 00:06:14.299 "iscsi_get_auth_groups", 00:06:14.300 "iscsi_auth_group_remove_secret", 00:06:14.300 "iscsi_auth_group_add_secret", 00:06:14.300 "iscsi_delete_auth_group", 00:06:14.300 "iscsi_create_auth_group", 00:06:14.300 "iscsi_set_discovery_auth", 00:06:14.300 "iscsi_get_options", 00:06:14.300 "iscsi_target_node_request_logout", 00:06:14.300 "iscsi_target_node_set_redirect", 00:06:14.300 "iscsi_target_node_set_auth", 00:06:14.300 "iscsi_target_node_add_lun", 00:06:14.300 
"iscsi_get_stats", 00:06:14.300 "iscsi_get_connections", 00:06:14.300 "iscsi_portal_group_set_auth", 00:06:14.300 "iscsi_start_portal_group", 00:06:14.300 "iscsi_delete_portal_group", 00:06:14.300 "iscsi_create_portal_group", 00:06:14.300 "iscsi_get_portal_groups", 00:06:14.300 "iscsi_delete_target_node", 00:06:14.300 "iscsi_target_node_remove_pg_ig_maps", 00:06:14.300 "iscsi_target_node_add_pg_ig_maps", 00:06:14.300 "iscsi_create_target_node", 00:06:14.300 "iscsi_get_target_nodes", 00:06:14.300 "iscsi_delete_initiator_group", 00:06:14.300 "iscsi_initiator_group_remove_initiators", 00:06:14.300 "iscsi_initiator_group_add_initiators", 00:06:14.300 "iscsi_create_initiator_group", 00:06:14.300 "iscsi_get_initiator_groups", 00:06:14.300 "nvmf_set_crdt", 00:06:14.300 "nvmf_set_config", 00:06:14.300 "nvmf_set_max_subsystems", 00:06:14.300 "nvmf_stop_mdns_prr", 00:06:14.300 "nvmf_publish_mdns_prr", 00:06:14.300 "nvmf_subsystem_get_listeners", 00:06:14.300 "nvmf_subsystem_get_qpairs", 00:06:14.300 "nvmf_subsystem_get_controllers", 00:06:14.300 "nvmf_get_stats", 00:06:14.300 "nvmf_get_transports", 00:06:14.300 "nvmf_create_transport", 00:06:14.300 "nvmf_get_targets", 00:06:14.300 "nvmf_delete_target", 00:06:14.300 "nvmf_create_target", 00:06:14.300 "nvmf_subsystem_allow_any_host", 00:06:14.300 "nvmf_subsystem_remove_host", 00:06:14.300 "nvmf_subsystem_add_host", 00:06:14.300 "nvmf_ns_remove_host", 00:06:14.300 "nvmf_ns_add_host", 00:06:14.300 "nvmf_subsystem_remove_ns", 00:06:14.300 "nvmf_subsystem_add_ns", 00:06:14.300 "nvmf_subsystem_listener_set_ana_state", 00:06:14.300 "nvmf_discovery_get_referrals", 00:06:14.300 "nvmf_discovery_remove_referral", 00:06:14.300 "nvmf_discovery_add_referral", 00:06:14.300 "nvmf_subsystem_remove_listener", 00:06:14.300 "nvmf_subsystem_add_listener", 00:06:14.300 "nvmf_delete_subsystem", 00:06:14.300 "nvmf_create_subsystem", 00:06:14.300 "nvmf_get_subsystems", 00:06:14.300 "env_dpdk_get_mem_stats", 00:06:14.300 "nbd_get_disks", 00:06:14.300 
"nbd_stop_disk", 00:06:14.300 "nbd_start_disk", 00:06:14.300 "ublk_recover_disk", 00:06:14.300 "ublk_get_disks", 00:06:14.300 "ublk_stop_disk", 00:06:14.300 "ublk_start_disk", 00:06:14.300 "ublk_destroy_target", 00:06:14.300 "ublk_create_target", 00:06:14.300 "virtio_blk_create_transport", 00:06:14.300 "virtio_blk_get_transports", 00:06:14.300 "vhost_controller_set_coalescing", 00:06:14.300 "vhost_get_controllers", 00:06:14.300 "vhost_delete_controller", 00:06:14.300 "vhost_create_blk_controller", 00:06:14.300 "vhost_scsi_controller_remove_target", 00:06:14.300 "vhost_scsi_controller_add_target", 00:06:14.300 "vhost_start_scsi_controller", 00:06:14.300 "vhost_create_scsi_controller", 00:06:14.300 "thread_set_cpumask", 00:06:14.300 "framework_get_governor", 00:06:14.300 "framework_get_scheduler", 00:06:14.300 "framework_set_scheduler", 00:06:14.300 "framework_get_reactors", 00:06:14.300 "thread_get_io_channels", 00:06:14.300 "thread_get_pollers", 00:06:14.300 "thread_get_stats", 00:06:14.300 "framework_monitor_context_switch", 00:06:14.300 "spdk_kill_instance", 00:06:14.300 "log_enable_timestamps", 00:06:14.300 "log_get_flags", 00:06:14.300 "log_clear_flag", 00:06:14.300 "log_set_flag", 00:06:14.300 "log_get_level", 00:06:14.300 "log_set_level", 00:06:14.300 "log_get_print_level", 00:06:14.300 "log_set_print_level", 00:06:14.300 "framework_enable_cpumask_locks", 00:06:14.300 "framework_disable_cpumask_locks", 00:06:14.300 "framework_wait_init", 00:06:14.300 "framework_start_init", 00:06:14.300 "scsi_get_devices", 00:06:14.300 "bdev_get_histogram", 00:06:14.300 "bdev_enable_histogram", 00:06:14.300 "bdev_set_qos_limit", 00:06:14.300 "bdev_set_qd_sampling_period", 00:06:14.300 "bdev_get_bdevs", 00:06:14.300 "bdev_reset_iostat", 00:06:14.300 "bdev_get_iostat", 00:06:14.300 "bdev_examine", 00:06:14.300 "bdev_wait_for_examine", 00:06:14.300 "bdev_set_options", 00:06:14.300 "notify_get_notifications", 00:06:14.300 "notify_get_types", 00:06:14.300 "accel_get_stats", 
00:06:14.300 "accel_set_options", 00:06:14.300 "accel_set_driver", 00:06:14.300 "accel_crypto_key_destroy", 00:06:14.300 "accel_crypto_keys_get", 00:06:14.300 "accel_crypto_key_create", 00:06:14.300 "accel_assign_opc", 00:06:14.300 "accel_get_module_info", 00:06:14.300 "accel_get_opc_assignments", 00:06:14.300 "vmd_rescan", 00:06:14.300 "vmd_remove_device", 00:06:14.300 "vmd_enable", 00:06:14.300 "sock_get_default_impl", 00:06:14.300 "sock_set_default_impl", 00:06:14.300 "sock_impl_set_options", 00:06:14.300 "sock_impl_get_options", 00:06:14.300 "iobuf_get_stats", 00:06:14.300 "iobuf_set_options", 00:06:14.300 "framework_get_pci_devices", 00:06:14.300 "framework_get_config", 00:06:14.300 "framework_get_subsystems", 00:06:14.300 "trace_get_info", 00:06:14.300 "trace_get_tpoint_group_mask", 00:06:14.300 "trace_disable_tpoint_group", 00:06:14.300 "trace_enable_tpoint_group", 00:06:14.300 "trace_clear_tpoint_mask", 00:06:14.300 "trace_set_tpoint_mask", 00:06:14.300 "keyring_get_keys", 00:06:14.300 "spdk_get_version", 00:06:14.300 "rpc_get_methods" 00:06:14.300 ] 00:06:14.300 04:57:14 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:14.300 04:57:14 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:14.300 04:57:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:14.300 04:57:14 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:14.300 04:57:14 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 71813 00:06:14.300 04:57:14 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 71813 ']' 00:06:14.300 04:57:14 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 71813 00:06:14.300 04:57:14 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:14.300 04:57:14 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:14.300 04:57:14 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71813 00:06:14.300 killing process with pid 71813 
00:06:14.300 04:57:14 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:14.300 04:57:14 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:14.300 04:57:14 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71813' 00:06:14.300 04:57:14 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 71813 00:06:14.300 04:57:14 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 71813 00:06:14.867 ************************************ 00:06:14.867 END TEST spdkcli_tcp 00:06:14.867 ************************************ 00:06:14.867 00:06:14.867 real 0m1.952s 00:06:14.867 user 0m3.647s 00:06:14.867 sys 0m0.531s 00:06:14.867 04:57:14 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.867 04:57:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:14.867 04:57:14 -- common/autotest_common.sh@1142 -- # return 0 00:06:14.867 04:57:14 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:14.867 04:57:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:14.867 04:57:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.867 04:57:14 -- common/autotest_common.sh@10 -- # set +x 00:06:14.867 ************************************ 00:06:14.867 START TEST dpdk_mem_utility 00:06:14.867 ************************************ 00:06:14.867 04:57:14 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:14.867 * Looking for test storage... 00:06:14.867 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:14.867 04:57:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:14.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:14.867 04:57:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=71904 00:06:14.867 04:57:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 71904 00:06:14.867 04:57:14 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 71904 ']' 00:06:14.867 04:57:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:14.867 04:57:14 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.867 04:57:14 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.867 04:57:14 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.867 04:57:14 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.868 04:57:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:14.868 [2024-07-23 04:57:15.007491] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:06:14.868 [2024-07-23 04:57:15.007574] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71904 ] 00:06:15.126 [2024-07-23 04:57:15.138875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.126 [2024-07-23 04:57:15.205472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.386 04:57:15 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.386 04:57:15 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:15.386 04:57:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:15.386 04:57:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:15.386 04:57:15 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.386 04:57:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:15.386 { 00:06:15.386 "filename": "/tmp/spdk_mem_dump.txt" 00:06:15.386 } 00:06:15.386 04:57:15 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.386 04:57:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:15.386 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:15.386 1 heaps totaling size 814.000000 MiB 00:06:15.386 size: 814.000000 MiB heap id: 0 00:06:15.386 end heaps---------- 00:06:15.386 8 mempools totaling size 598.116089 MiB 00:06:15.386 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:15.386 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:15.386 size: 84.521057 MiB name: bdev_io_71904 00:06:15.386 size: 51.011292 MiB name: evtpool_71904 00:06:15.386 size: 50.003479 MiB name: msgpool_71904 00:06:15.386 size: 
21.763794 MiB name: PDU_Pool 00:06:15.386 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:15.386 size: 0.026123 MiB name: Session_Pool 00:06:15.386 end mempools------- 00:06:15.386 6 memzones totaling size 4.142822 MiB 00:06:15.386 size: 1.000366 MiB name: RG_ring_0_71904 00:06:15.386 size: 1.000366 MiB name: RG_ring_1_71904 00:06:15.386 size: 1.000366 MiB name: RG_ring_4_71904 00:06:15.386 size: 1.000366 MiB name: RG_ring_5_71904 00:06:15.386 size: 0.125366 MiB name: RG_ring_2_71904 00:06:15.386 size: 0.015991 MiB name: RG_ring_3_71904 00:06:15.386 end memzones------- 00:06:15.386 04:57:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:15.646 heap id: 0 total size: 814.000000 MiB number of busy elements: 296 number of free elements: 15 00:06:15.646 list of free elements. size: 12.472656 MiB 00:06:15.646 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:15.646 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:15.646 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:15.646 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:15.646 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:15.646 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:15.646 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:15.646 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:15.646 element at address: 0x200000200000 with size: 0.833191 MiB 00:06:15.646 element at address: 0x20001aa00000 with size: 0.568420 MiB 00:06:15.646 element at address: 0x20000b200000 with size: 0.488892 MiB 00:06:15.646 element at address: 0x200000800000 with size: 0.486145 MiB 00:06:15.646 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:15.646 element at address: 0x200027e00000 with size: 0.396667 MiB 00:06:15.646 element at address: 0x200003a00000 with size: 0.348572 MiB 00:06:15.646 list of standard 
malloc elements. size: 199.264771 MiB 00:06:15.646 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:15.646 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:15.646 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:15.646 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:15.646 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:15.646 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:15.646 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:15.646 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:15.646 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:15.646 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:06:15.646 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:06:15.646 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:06:15.646 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:06:15.646 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:06:15.646 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:06:15.646 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:06:15.646 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:06:15.646 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:06:15.646 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:06:15.646 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:15.646 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:15.646 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:15.646 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:15.646 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:15.646 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:15.646 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:15.646 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:15.646 
element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:15.646 [several hundred further heap elements, each with size: 0.000183 MiB, at addresses 0x2000002d6240 through 0x200027e6ff00, elided] 00:06:15.648 list of memzone associated elements.
size: 602.262573 MiB 00:06:15.648 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:15.648 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:15.648 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:15.648 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:15.648 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:15.648 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_71904_0 00:06:15.648 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:15.648 associated memzone info: size: 48.002930 MiB name: MP_evtpool_71904_0 00:06:15.648 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:15.648 associated memzone info: size: 48.002930 MiB name: MP_msgpool_71904_0 00:06:15.648 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:15.648 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:15.648 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:15.648 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:15.648 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:15.648 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_71904 00:06:15.648 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:15.648 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_71904 00:06:15.649 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:15.649 associated memzone info: size: 1.007996 MiB name: MP_evtpool_71904 00:06:15.649 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:15.649 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:15.649 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:15.649 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:15.649 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:15.649 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:15.649 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:15.649 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:15.649 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:15.649 associated memzone info: size: 1.000366 MiB name: RG_ring_0_71904 00:06:15.649 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:15.649 associated memzone info: size: 1.000366 MiB name: RG_ring_1_71904 00:06:15.649 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:15.649 associated memzone info: size: 1.000366 MiB name: RG_ring_4_71904 00:06:15.649 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:15.649 associated memzone info: size: 1.000366 MiB name: RG_ring_5_71904 00:06:15.649 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:15.649 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_71904 00:06:15.649 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:15.649 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:15.649 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:15.649 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:15.649 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:15.649 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:15.649 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:15.649 associated memzone info: size: 0.125366 MiB name: RG_ring_2_71904 00:06:15.649 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:15.649 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:15.649 element at address: 0x200027e65a40 with size: 0.023743 MiB 00:06:15.649 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:15.649 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:15.649 
associated memzone info: size: 0.015991 MiB name: RG_ring_3_71904 00:06:15.649 element at address: 0x200027e6bb80 with size: 0.002441 MiB 00:06:15.649 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:15.649 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:15.649 associated memzone info: size: 0.000183 MiB name: MP_msgpool_71904 00:06:15.649 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:15.649 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_71904 00:06:15.649 element at address: 0x200027e6c640 with size: 0.000305 MiB 00:06:15.649 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:15.649 04:57:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:15.649 04:57:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 71904 00:06:15.649 04:57:15 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 71904 ']' 00:06:15.649 04:57:15 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 71904 00:06:15.649 04:57:15 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:15.649 04:57:15 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:15.649 04:57:15 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71904 00:06:15.649 04:57:15 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:15.649 04:57:15 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:15.649 04:57:15 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71904' 00:06:15.649 killing process with pid 71904 00:06:15.649 04:57:15 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 71904 00:06:15.649 04:57:15 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 71904 00:06:15.908 00:06:15.908 real 0m1.262s 00:06:15.908 user 0m1.171s 00:06:15.908 sys 
0m0.425s 00:06:15.908 04:57:16 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.908 04:57:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:15.908 ************************************ 00:06:15.908 END TEST dpdk_mem_utility 00:06:15.908 ************************************ 00:06:16.167 04:57:16 -- common/autotest_common.sh@1142 -- # return 0 00:06:16.167 04:57:16 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:16.167 04:57:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.167 04:57:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.167 04:57:16 -- common/autotest_common.sh@10 -- # set +x 00:06:16.167 ************************************ 00:06:16.167 START TEST event 00:06:16.167 ************************************ 00:06:16.167 04:57:16 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:16.167 * Looking for test storage... 
00:06:16.167 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:16.167 04:57:16 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:16.167 04:57:16 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:16.167 04:57:16 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:16.167 04:57:16 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:16.167 04:57:16 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.167 04:57:16 event -- common/autotest_common.sh@10 -- # set +x 00:06:16.167 ************************************ 00:06:16.167 START TEST event_perf 00:06:16.167 ************************************ 00:06:16.167 04:57:16 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:16.167 Running I/O for 1 seconds...[2024-07-23 04:57:16.289936] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:06:16.167 [2024-07-23 04:57:16.290035] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71972 ] 00:06:16.426 [2024-07-23 04:57:16.426433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:16.426 [2024-07-23 04:57:16.512842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.426 [2024-07-23 04:57:16.512987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:16.426 [2024-07-23 04:57:16.513108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:16.426 Running I/O for 1 seconds...[2024-07-23 04:57:16.513403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.374 00:06:17.374 lcore 0: 130218 00:06:17.374 lcore 1: 130218 00:06:17.374 lcore 2: 130218 00:06:17.374 lcore 3: 130218 00:06:17.374 done. 
00:06:17.633 ************************************ 00:06:17.633 END TEST event_perf 00:06:17.633 ************************************ 00:06:17.633 00:06:17.633 real 0m1.321s 00:06:17.633 user 0m4.129s 00:06:17.633 sys 0m0.069s 00:06:17.633 04:57:17 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.633 04:57:17 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:17.633 04:57:17 event -- common/autotest_common.sh@1142 -- # return 0 00:06:17.633 04:57:17 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:17.633 04:57:17 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:17.633 04:57:17 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.633 04:57:17 event -- common/autotest_common.sh@10 -- # set +x 00:06:17.633 ************************************ 00:06:17.633 START TEST event_reactor 00:06:17.633 ************************************ 00:06:17.633 04:57:17 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:17.633 [2024-07-23 04:57:17.665214] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:06:17.633 [2024-07-23 04:57:17.665360] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72011 ] 00:06:17.633 [2024-07-23 04:57:17.796589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.891 [2024-07-23 04:57:17.882071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.827 test_start 00:06:18.827 oneshot 00:06:18.827 tick 100 00:06:18.827 tick 100 00:06:18.827 tick 250 00:06:18.827 tick 100 00:06:18.827 tick 100 00:06:18.827 tick 100 00:06:18.827 tick 250 00:06:18.827 tick 500 00:06:18.827 tick 100 00:06:18.827 tick 100 00:06:18.827 tick 250 00:06:18.827 tick 100 00:06:18.827 tick 100 00:06:18.827 test_end 00:06:18.827 00:06:18.827 real 0m1.297s 00:06:18.827 user 0m1.127s 00:06:18.827 sys 0m0.062s 00:06:18.827 ************************************ 00:06:18.827 END TEST event_reactor 00:06:18.827 ************************************ 00:06:18.827 04:57:18 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.827 04:57:18 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:18.827 04:57:18 event -- common/autotest_common.sh@1142 -- # return 0 00:06:18.827 04:57:18 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:18.827 04:57:18 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:18.827 04:57:18 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.827 04:57:18 event -- common/autotest_common.sh@10 -- # set +x 00:06:18.827 ************************************ 00:06:18.827 START TEST event_reactor_perf 00:06:18.827 ************************************ 00:06:18.827 04:57:18 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:18.827 [2024-07-23 04:57:19.018106] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:06:18.827 [2024-07-23 04:57:19.018201] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72046 ] 00:06:19.086 [2024-07-23 04:57:19.154546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.086 [2024-07-23 04:57:19.215628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.464 test_start 00:06:20.464 test_end 00:06:20.464 Performance: 426633 events per second 00:06:20.464 00:06:20.464 real 0m1.272s 00:06:20.464 user 0m1.109s 00:06:20.464 sys 0m0.057s 00:06:20.464 ************************************ 00:06:20.464 END TEST event_reactor_perf 00:06:20.464 04:57:20 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.464 04:57:20 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:20.464 ************************************ 00:06:20.464 04:57:20 event -- common/autotest_common.sh@1142 -- # return 0 00:06:20.464 04:57:20 event -- event/event.sh@49 -- # uname -s 00:06:20.464 04:57:20 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:20.464 04:57:20 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:20.464 04:57:20 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.464 04:57:20 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.464 04:57:20 event -- common/autotest_common.sh@10 -- # set +x 00:06:20.464 ************************************ 00:06:20.464 START TEST event_scheduler 00:06:20.464 ************************************ 00:06:20.464 04:57:20 
event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:20.464 * Looking for test storage... 00:06:20.464 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:20.464 04:57:20 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:20.464 04:57:20 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=72102 00:06:20.464 04:57:20 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:20.464 04:57:20 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:20.464 04:57:20 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 72102 00:06:20.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.464 04:57:20 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 72102 ']' 00:06:20.464 04:57:20 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.464 04:57:20 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.464 04:57:20 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.464 04:57:20 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.464 04:57:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:20.464 [2024-07-23 04:57:20.487151] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:06:20.464 [2024-07-23 04:57:20.487241] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72102 ] 00:06:20.464 [2024-07-23 04:57:20.629256] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:20.722 [2024-07-23 04:57:20.699908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.722 [2024-07-23 04:57:20.700017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.722 [2024-07-23 04:57:20.700155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:20.722 [2024-07-23 04:57:20.700161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.289 04:57:21 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.289 04:57:21 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:21.289 04:57:21 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:21.289 04:57:21 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.289 04:57:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:21.289 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:21.289 POWER: Cannot set governor of lcore 0 to userspace 00:06:21.289 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:21.289 POWER: Cannot set governor of lcore 0 to performance 00:06:21.289 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:21.289 POWER: Cannot set governor of lcore 0 to userspace 00:06:21.289 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:21.289 POWER: Unable to set Power 
Management Environment for lcore 0 00:06:21.289 [2024-07-23 04:57:21.378701] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:21.289 [2024-07-23 04:57:21.378714] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:21.289 [2024-07-23 04:57:21.378737] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:21.289 [2024-07-23 04:57:21.378752] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:21.289 [2024-07-23 04:57:21.378759] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:21.289 [2024-07-23 04:57:21.378765] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:21.289 04:57:21 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.289 04:57:21 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:21.289 04:57:21 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.289 04:57:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:21.289 [2024-07-23 04:57:21.465664] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:21.289 04:57:21 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.289 04:57:21 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:21.289 04:57:21 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:21.289 04:57:21 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.289 04:57:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:21.289 ************************************ 00:06:21.289 START TEST scheduler_create_thread 00:06:21.289 ************************************ 00:06:21.289 04:57:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:21.289 04:57:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:21.289 04:57:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.289 04:57:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.289 2 00:06:21.289 04:57:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.289 04:57:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:21.289 04:57:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.290 04:57:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.290 3 00:06:21.290 04:57:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.290 04:57:21 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:21.290 04:57:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.290 04:57:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.290 4 00:06:21.290 04:57:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.290 04:57:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:21.290 04:57:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.290 04:57:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.548 5 00:06:21.548 04:57:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.548 04:57:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:21.548 04:57:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.548 04:57:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.548 6 00:06:21.548 04:57:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.548 04:57:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:21.548 04:57:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.548 04:57:21 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:06:21.548 7 00:06:21.548 04:57:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.548 04:57:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:21.548 04:57:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.548 04:57:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.548 8 00:06:21.548 04:57:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.548 04:57:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:21.548 04:57:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.548 04:57:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.548 9 00:06:21.548 04:57:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.548 04:57:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:21.548 04:57:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.548 04:57:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.548 10 00:06:21.548 04:57:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.548 04:57:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:06:21.548 04:57:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.548 04:57:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.548 04:57:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.548 04:57:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:21.549 04:57:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:21.549 04:57:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.549 04:57:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.549 04:57:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.549 04:57:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:21.549 04:57:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.549 04:57:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.923 04:57:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.923 04:57:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:22.923 04:57:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:22.923 04:57:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.923 04:57:23 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.296 ************************************ 00:06:24.296 END TEST scheduler_create_thread 00:06:24.296 ************************************ 00:06:24.296 04:57:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.296 00:06:24.296 real 0m2.609s 00:06:24.296 user 0m0.017s 00:06:24.296 sys 0m0.006s 00:06:24.296 04:57:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.296 04:57:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.296 04:57:24 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:24.296 04:57:24 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:24.296 04:57:24 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 72102 00:06:24.296 04:57:24 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 72102 ']' 00:06:24.296 04:57:24 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 72102 00:06:24.296 04:57:24 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:24.296 04:57:24 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:24.296 04:57:24 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72102 00:06:24.296 killing process with pid 72102 00:06:24.296 04:57:24 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:24.296 04:57:24 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:24.296 04:57:24 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72102' 00:06:24.296 04:57:24 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 72102 00:06:24.296 04:57:24 event.event_scheduler -- 
common/autotest_common.sh@972 -- # wait 72102 00:06:24.554 [2024-07-23 04:57:24.568912] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:24.812 ************************************ 00:06:24.812 END TEST event_scheduler 00:06:24.812 ************************************ 00:06:24.812 00:06:24.812 real 0m4.462s 00:06:24.812 user 0m8.266s 00:06:24.812 sys 0m0.358s 00:06:24.812 04:57:24 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.812 04:57:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:24.812 04:57:24 event -- common/autotest_common.sh@1142 -- # return 0 00:06:24.812 04:57:24 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:24.812 04:57:24 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:24.813 04:57:24 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:24.813 04:57:24 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.813 04:57:24 event -- common/autotest_common.sh@10 -- # set +x 00:06:24.813 ************************************ 00:06:24.813 START TEST app_repeat 00:06:24.813 ************************************ 00:06:24.813 04:57:24 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:24.813 04:57:24 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.813 04:57:24 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.813 04:57:24 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:24.813 04:57:24 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.813 04:57:24 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:24.813 04:57:24 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:24.813 04:57:24 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:24.813 04:57:24 event.app_repeat -- event/event.sh@19 -- # repeat_pid=72202 
00:06:24.813 Process app_repeat pid: 72202 00:06:24.813 04:57:24 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:24.813 04:57:24 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 72202' 00:06:24.813 04:57:24 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:24.813 04:57:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:24.813 spdk_app_start Round 0 00:06:24.813 04:57:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:24.813 04:57:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72202 /var/tmp/spdk-nbd.sock 00:06:24.813 04:57:24 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 72202 ']' 00:06:24.813 04:57:24 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:24.813 04:57:24 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:24.813 04:57:24 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:24.813 04:57:24 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.813 04:57:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:24.813 [2024-07-23 04:57:24.888085] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:06:24.813 [2024-07-23 04:57:24.888224] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72202 ] 00:06:25.177 [2024-07-23 04:57:25.035051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:25.177 [2024-07-23 04:57:25.148185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.177 [2024-07-23 04:57:25.148211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.748 04:57:25 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.748 04:57:25 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:25.748 04:57:25 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:26.007 Malloc0 00:06:26.007 04:57:26 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:26.266 Malloc1 00:06:26.266 04:57:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.266 04:57:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.266 04:57:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.266 04:57:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:26.266 04:57:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.266 04:57:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:26.266 04:57:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.266 04:57:26 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.266 04:57:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.266 04:57:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:26.266 04:57:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.266 04:57:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:26.266 04:57:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:26.266 04:57:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:26.266 04:57:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.266 04:57:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:26.525 /dev/nbd0 00:06:26.525 04:57:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:26.525 04:57:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:26.525 04:57:26 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:26.525 04:57:26 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:26.525 04:57:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:26.525 04:57:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:26.525 04:57:26 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:26.525 04:57:26 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:26.525 04:57:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:26.525 04:57:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:26.525 04:57:26 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:26.525 1+0 records in 00:06:26.525 1+0 
records out 00:06:26.525 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021994 s, 18.6 MB/s 00:06:26.525 04:57:26 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:26.525 04:57:26 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:26.525 04:57:26 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:26.525 04:57:26 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:26.525 04:57:26 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:26.525 04:57:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.525 04:57:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.525 04:57:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:26.784 /dev/nbd1 00:06:26.784 04:57:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:26.784 04:57:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:26.784 04:57:26 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:26.784 04:57:26 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:26.784 04:57:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:26.784 04:57:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:26.784 04:57:26 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:26.784 04:57:26 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:26.784 04:57:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:26.784 04:57:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:26.784 04:57:26 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:26.784 1+0 records in 00:06:26.784 1+0 records out 00:06:26.784 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280156 s, 14.6 MB/s 00:06:26.784 04:57:26 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:26.784 04:57:26 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:26.784 04:57:26 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:26.784 04:57:26 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:26.784 04:57:26 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:26.784 04:57:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.784 04:57:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.784 04:57:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:26.784 04:57:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.784 04:57:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.043 04:57:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:27.043 { 00:06:27.043 "nbd_device": "/dev/nbd0", 00:06:27.043 "bdev_name": "Malloc0" 00:06:27.043 }, 00:06:27.043 { 00:06:27.043 "nbd_device": "/dev/nbd1", 00:06:27.043 "bdev_name": "Malloc1" 00:06:27.043 } 00:06:27.043 ]' 00:06:27.043 04:57:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:27.043 { 00:06:27.043 "nbd_device": "/dev/nbd0", 00:06:27.043 "bdev_name": "Malloc0" 00:06:27.043 }, 00:06:27.043 { 00:06:27.043 "nbd_device": "/dev/nbd1", 00:06:27.043 "bdev_name": "Malloc1" 00:06:27.043 } 00:06:27.043 ]' 00:06:27.043 04:57:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:06:27.043 04:57:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:27.043 /dev/nbd1' 00:06:27.043 04:57:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:27.043 /dev/nbd1' 00:06:27.043 04:57:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.043 04:57:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:27.043 04:57:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:27.043 04:57:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:27.043 04:57:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:27.043 04:57:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:27.043 04:57:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.043 04:57:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.043 04:57:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:27.043 04:57:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:27.043 04:57:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:27.043 04:57:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:27.043 256+0 records in 00:06:27.043 256+0 records out 00:06:27.043 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00680045 s, 154 MB/s 00:06:27.043 04:57:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.043 04:57:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:27.043 256+0 records in 00:06:27.043 256+0 records out 00:06:27.043 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235652 s, 44.5 MB/s 00:06:27.044 04:57:27 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.044 04:57:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:27.044 256+0 records in 00:06:27.044 256+0 records out 00:06:27.044 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235213 s, 44.6 MB/s 00:06:27.044 04:57:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:27.044 04:57:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.044 04:57:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.044 04:57:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:27.044 04:57:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:27.044 04:57:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:27.044 04:57:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:27.044 04:57:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.044 04:57:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:27.044 04:57:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.044 04:57:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:27.044 04:57:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:27.044 04:57:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:27.044 04:57:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.044 04:57:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.044 04:57:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:27.044 04:57:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:27.044 04:57:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.044 04:57:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:27.303 04:57:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:27.303 04:57:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:27.303 04:57:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:27.303 04:57:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.303 04:57:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.303 04:57:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:27.303 04:57:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:27.303 04:57:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.303 04:57:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.303 04:57:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:27.562 04:57:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:27.562 04:57:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:27.562 04:57:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:27.562 04:57:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.562 04:57:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.562 04:57:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:27.562 04:57:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:06:27.562 04:57:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.562 04:57:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.562 04:57:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.562 04:57:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.821 04:57:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:27.821 04:57:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:27.821 04:57:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.821 04:57:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:27.821 04:57:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:27.821 04:57:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.821 04:57:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:27.821 04:57:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:27.821 04:57:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:27.821 04:57:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:27.821 04:57:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:27.821 04:57:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:27.821 04:57:28 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:28.080 04:57:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:28.339 [2024-07-23 04:57:28.539474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.598 [2024-07-23 04:57:28.615091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.598 [2024-07-23 04:57:28.615113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.598 
[2024-07-23 04:57:28.689247] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:28.598 [2024-07-23 04:57:28.689330] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:31.133 spdk_app_start Round 1 00:06:31.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:31.133 04:57:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:31.133 04:57:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:31.133 04:57:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72202 /var/tmp/spdk-nbd.sock 00:06:31.133 04:57:31 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 72202 ']' 00:06:31.133 04:57:31 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:31.133 04:57:31 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:31.133 04:57:31 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:31.133 04:57:31 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:31.133 04:57:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:31.392 04:57:31 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.392 04:57:31 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:31.392 04:57:31 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.651 Malloc0 00:06:31.651 04:57:31 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.910 Malloc1 00:06:31.910 04:57:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:31.910 04:57:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.910 04:57:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.910 04:57:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:31.910 04:57:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.910 04:57:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:31.910 04:57:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:31.910 04:57:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.910 04:57:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.910 04:57:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:31.910 04:57:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.910 04:57:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:31.910 04:57:32 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:31.910 04:57:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:31.910 04:57:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.910 04:57:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:32.169 /dev/nbd0 00:06:32.169 04:57:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:32.169 04:57:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:32.169 04:57:32 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:32.169 04:57:32 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:32.169 04:57:32 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:32.169 04:57:32 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:32.169 04:57:32 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:32.169 04:57:32 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:32.169 04:57:32 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:32.169 04:57:32 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:32.169 04:57:32 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:32.169 1+0 records in 00:06:32.169 1+0 records out 00:06:32.169 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287292 s, 14.3 MB/s 00:06:32.169 04:57:32 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:32.169 04:57:32 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:32.169 04:57:32 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:32.169 
04:57:32 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:32.169 04:57:32 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:32.169 04:57:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.169 04:57:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.169 04:57:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:32.428 /dev/nbd1 00:06:32.428 04:57:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:32.428 04:57:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:32.428 04:57:32 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:32.428 04:57:32 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:32.428 04:57:32 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:32.428 04:57:32 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:32.428 04:57:32 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:32.428 04:57:32 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:32.428 04:57:32 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:32.428 04:57:32 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:32.428 04:57:32 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:32.428 1+0 records in 00:06:32.428 1+0 records out 00:06:32.428 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382761 s, 10.7 MB/s 00:06:32.428 04:57:32 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:32.428 04:57:32 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:32.428 04:57:32 event.app_repeat 
-- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:32.428 04:57:32 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:32.428 04:57:32 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:32.428 04:57:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.428 04:57:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.428 04:57:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:32.428 04:57:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.428 04:57:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.687 04:57:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:32.687 { 00:06:32.687 "nbd_device": "/dev/nbd0", 00:06:32.687 "bdev_name": "Malloc0" 00:06:32.687 }, 00:06:32.687 { 00:06:32.687 "nbd_device": "/dev/nbd1", 00:06:32.687 "bdev_name": "Malloc1" 00:06:32.687 } 00:06:32.687 ]' 00:06:32.687 04:57:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:32.687 { 00:06:32.687 "nbd_device": "/dev/nbd0", 00:06:32.687 "bdev_name": "Malloc0" 00:06:32.687 }, 00:06:32.687 { 00:06:32.687 "nbd_device": "/dev/nbd1", 00:06:32.687 "bdev_name": "Malloc1" 00:06:32.687 } 00:06:32.687 ]' 00:06:32.687 04:57:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:32.687 04:57:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:32.687 /dev/nbd1' 00:06:32.687 04:57:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:32.687 /dev/nbd1' 00:06:32.687 04:57:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.687 04:57:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:32.687 04:57:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:32.687 
04:57:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:32.687 04:57:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:32.687 04:57:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:32.687 04:57:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.687 04:57:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.687 04:57:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:32.687 04:57:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:32.687 04:57:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:32.687 04:57:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:32.687 256+0 records in 00:06:32.687 256+0 records out 00:06:32.687 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00470348 s, 223 MB/s 00:06:32.687 04:57:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.687 04:57:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:32.946 256+0 records in 00:06:32.946 256+0 records out 00:06:32.946 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0245675 s, 42.7 MB/s 00:06:32.946 04:57:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.946 04:57:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:32.946 256+0 records in 00:06:32.946 256+0 records out 00:06:32.946 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0277592 s, 37.8 MB/s 00:06:32.946 04:57:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:06:32.946 04:57:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.946 04:57:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.946 04:57:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:32.946 04:57:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:32.946 04:57:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:32.946 04:57:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:32.946 04:57:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.946 04:57:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:32.946 04:57:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.946 04:57:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:32.946 04:57:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:32.946 04:57:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:32.946 04:57:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.946 04:57:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.946 04:57:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:32.946 04:57:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:32.946 04:57:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.946 04:57:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:33.216 04:57:33 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:33.216 04:57:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:33.216 04:57:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:33.216 04:57:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.216 04:57:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.216 04:57:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:33.216 04:57:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:33.216 04:57:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.216 04:57:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.216 04:57:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:33.481 04:57:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:33.481 04:57:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:33.481 04:57:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:33.482 04:57:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.482 04:57:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.482 04:57:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:33.482 04:57:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:33.482 04:57:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.482 04:57:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:33.482 04:57:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.482 04:57:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.740 04:57:33 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:33.740 04:57:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:33.740 04:57:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.740 04:57:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:33.740 04:57:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.740 04:57:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:33.740 04:57:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:33.740 04:57:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:33.740 04:57:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:33.740 04:57:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:33.740 04:57:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:33.740 04:57:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:33.740 04:57:33 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:33.999 04:57:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:34.258 [2024-07-23 04:57:34.422195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:34.517 [2024-07-23 04:57:34.543662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.517 [2024-07-23 04:57:34.543704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.517 [2024-07-23 04:57:34.620817] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:34.517 [2024-07-23 04:57:34.620899] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:37.049 spdk_app_start Round 2 00:06:37.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:37.049 04:57:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:37.049 04:57:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:37.049 04:57:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72202 /var/tmp/spdk-nbd.sock 00:06:37.049 04:57:37 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 72202 ']' 00:06:37.049 04:57:37 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:37.049 04:57:37 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.049 04:57:37 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:37.049 04:57:37 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.049 04:57:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:37.308 04:57:37 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.308 04:57:37 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:37.308 04:57:37 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:37.567 Malloc0 00:06:37.567 04:57:37 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:37.826 Malloc1 00:06:37.826 04:57:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:37.826 04:57:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.826 04:57:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:37.826 04:57:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:37.826 04:57:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.826 04:57:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:37.826 04:57:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:37.826 04:57:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.826 04:57:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:37.826 04:57:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:37.826 04:57:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.826 04:57:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:37.826 04:57:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:37.826 04:57:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:37.826 04:57:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.826 04:57:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:38.094 /dev/nbd0 00:06:38.094 04:57:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:38.094 04:57:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:38.094 04:57:38 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:38.094 04:57:38 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:38.094 04:57:38 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:38.094 04:57:38 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:38.094 04:57:38 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:38.094 04:57:38 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:38.094 04:57:38 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 
00:06:38.094 04:57:38 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:38.094 04:57:38 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:38.094 1+0 records in 00:06:38.094 1+0 records out 00:06:38.094 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000202141 s, 20.3 MB/s 00:06:38.094 04:57:38 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:38.094 04:57:38 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:38.094 04:57:38 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:38.094 04:57:38 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:38.094 04:57:38 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:38.094 04:57:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:38.094 04:57:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.094 04:57:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:38.369 /dev/nbd1 00:06:38.369 04:57:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:38.369 04:57:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:38.369 04:57:38 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:38.369 04:57:38 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:38.369 04:57:38 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:38.369 04:57:38 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:38.369 04:57:38 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:38.369 04:57:38 event.app_repeat -- 
common/autotest_common.sh@871 -- # break 00:06:38.369 04:57:38 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:38.369 04:57:38 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:38.369 04:57:38 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:38.369 1+0 records in 00:06:38.369 1+0 records out 00:06:38.369 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430179 s, 9.5 MB/s 00:06:38.369 04:57:38 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:38.369 04:57:38 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:38.369 04:57:38 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:38.369 04:57:38 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:38.369 04:57:38 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:38.369 04:57:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:38.369 04:57:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.369 04:57:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:38.369 04:57:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.369 04:57:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:38.627 04:57:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:38.627 { 00:06:38.627 "nbd_device": "/dev/nbd0", 00:06:38.627 "bdev_name": "Malloc0" 00:06:38.627 }, 00:06:38.627 { 00:06:38.627 "nbd_device": "/dev/nbd1", 00:06:38.627 "bdev_name": "Malloc1" 00:06:38.627 } 00:06:38.627 ]' 00:06:38.627 04:57:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:38.627 { 
00:06:38.627 "nbd_device": "/dev/nbd0", 00:06:38.627 "bdev_name": "Malloc0" 00:06:38.627 }, 00:06:38.627 { 00:06:38.627 "nbd_device": "/dev/nbd1", 00:06:38.627 "bdev_name": "Malloc1" 00:06:38.627 } 00:06:38.627 ]' 00:06:38.627 04:57:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:38.627 04:57:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:38.627 /dev/nbd1' 00:06:38.627 04:57:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:38.627 /dev/nbd1' 00:06:38.627 04:57:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:38.627 04:57:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:38.627 04:57:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:38.627 04:57:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:38.627 04:57:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:38.627 04:57:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:38.627 04:57:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.627 04:57:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:38.627 04:57:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:38.627 04:57:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:38.627 04:57:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:38.627 04:57:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:38.627 256+0 records in 00:06:38.627 256+0 records out 00:06:38.627 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104045 s, 101 MB/s 00:06:38.627 04:57:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:38.627 04:57:38 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:38.627 256+0 records in 00:06:38.627 256+0 records out 00:06:38.627 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02423 s, 43.3 MB/s 00:06:38.627 04:57:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:38.627 04:57:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:38.627 256+0 records in 00:06:38.627 256+0 records out 00:06:38.627 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241277 s, 43.5 MB/s 00:06:38.627 04:57:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:38.627 04:57:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.627 04:57:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:38.627 04:57:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:38.627 04:57:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:38.627 04:57:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:38.627 04:57:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:38.627 04:57:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.627 04:57:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:38.627 04:57:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.627 04:57:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:38.627 04:57:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:06:38.627 04:57:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:38.627 04:57:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.627 04:57:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.627 04:57:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:38.627 04:57:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:38.627 04:57:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.627 04:57:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:38.886 04:57:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:38.886 04:57:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:38.886 04:57:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:38.886 04:57:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.886 04:57:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.886 04:57:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:38.886 04:57:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:38.886 04:57:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.886 04:57:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.886 04:57:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:39.145 04:57:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:39.145 04:57:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:39.145 04:57:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:39.145 04:57:39 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.145 04:57:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.145 04:57:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:39.145 04:57:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:39.145 04:57:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.145 04:57:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:39.145 04:57:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.145 04:57:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:39.403 04:57:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:39.403 04:57:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:39.403 04:57:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.403 04:57:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:39.403 04:57:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.403 04:57:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:39.403 04:57:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:39.403 04:57:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:39.403 04:57:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:39.403 04:57:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:39.403 04:57:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:39.403 04:57:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:39.403 04:57:39 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:39.662 04:57:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:39.920 
[2024-07-23 04:57:40.037814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:39.920 [2024-07-23 04:57:40.099550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.920 [2024-07-23 04:57:40.099562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.178 [2024-07-23 04:57:40.169268] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:40.178 [2024-07-23 04:57:40.169371] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:42.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:42.711 04:57:42 event.app_repeat -- event/event.sh@38 -- # waitforlisten 72202 /var/tmp/spdk-nbd.sock 00:06:42.711 04:57:42 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 72202 ']' 00:06:42.712 04:57:42 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:42.712 04:57:42 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.712 04:57:42 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:42.712 04:57:42 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.712 04:57:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:42.970 04:57:43 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.970 04:57:43 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:42.970 04:57:43 event.app_repeat -- event/event.sh@39 -- # killprocess 72202 00:06:42.970 04:57:43 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 72202 ']' 00:06:42.970 04:57:43 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 72202 00:06:42.970 04:57:43 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:42.970 04:57:43 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:42.970 04:57:43 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72202 00:06:42.970 killing process with pid 72202 00:06:42.970 04:57:43 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:42.970 04:57:43 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:42.970 04:57:43 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72202' 00:06:42.970 04:57:43 event.app_repeat -- common/autotest_common.sh@967 -- # kill 72202 00:06:42.970 04:57:43 event.app_repeat -- common/autotest_common.sh@972 -- # wait 72202 00:06:43.228 spdk_app_start is called in Round 0. 00:06:43.228 Shutdown signal received, stop current app iteration 00:06:43.228 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 reinitialization... 00:06:43.228 spdk_app_start is called in Round 1. 00:06:43.228 Shutdown signal received, stop current app iteration 00:06:43.228 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 reinitialization... 00:06:43.228 spdk_app_start is called in Round 2. 
00:06:43.228 Shutdown signal received, stop current app iteration 00:06:43.228 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 reinitialization... 00:06:43.228 spdk_app_start is called in Round 3. 00:06:43.228 Shutdown signal received, stop current app iteration 00:06:43.228 04:57:43 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:43.228 04:57:43 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:43.228 00:06:43.228 real 0m18.462s 00:06:43.228 user 0m40.881s 00:06:43.228 sys 0m2.815s 00:06:43.228 04:57:43 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.228 ************************************ 00:06:43.228 END TEST app_repeat 00:06:43.228 ************************************ 00:06:43.228 04:57:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:43.228 04:57:43 event -- common/autotest_common.sh@1142 -- # return 0 00:06:43.228 04:57:43 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:43.228 04:57:43 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:43.228 04:57:43 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:43.228 04:57:43 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.228 04:57:43 event -- common/autotest_common.sh@10 -- # set +x 00:06:43.228 ************************************ 00:06:43.228 START TEST cpu_locks 00:06:43.228 ************************************ 00:06:43.228 04:57:43 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:43.228 * Looking for test storage... 
00:06:43.487 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:43.487 04:57:43 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:43.487 04:57:43 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:43.487 04:57:43 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:43.487 04:57:43 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:43.487 04:57:43 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:43.487 04:57:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.487 04:57:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.487 ************************************ 00:06:43.487 START TEST default_locks 00:06:43.487 ************************************ 00:06:43.487 04:57:43 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:43.487 04:57:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=72629 00:06:43.487 04:57:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 72629 00:06:43.487 04:57:43 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 72629 ']' 00:06:43.487 04:57:43 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.487 04:57:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:43.487 04:57:43 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.487 04:57:43 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:43.487 04:57:43 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.487 04:57:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.487 [2024-07-23 04:57:43.547574] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:06:43.487 [2024-07-23 04:57:43.547678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72629 ] 00:06:43.487 [2024-07-23 04:57:43.683657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.745 [2024-07-23 04:57:43.757035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.312 04:57:44 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:44.312 04:57:44 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:44.312 04:57:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 72629 00:06:44.312 04:57:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 72629 00:06:44.312 04:57:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:44.880 04:57:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 72629 00:06:44.880 04:57:44 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 72629 ']' 00:06:44.880 04:57:44 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 72629 00:06:44.880 04:57:44 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:44.880 04:57:44 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:44.880 04:57:44 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72629 00:06:44.880 
killing process with pid 72629 00:06:44.880 04:57:44 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:44.880 04:57:44 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:44.880 04:57:44 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72629' 00:06:44.880 04:57:44 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 72629 00:06:44.880 04:57:44 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 72629 00:06:45.448 04:57:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 72629 00:06:45.448 04:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:45.448 04:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 72629 00:06:45.448 04:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:45.448 04:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.448 04:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:45.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:45.448 ERROR: process (pid: 72629) is no longer running 00:06:45.448 04:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.448 04:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 72629 00:06:45.448 04:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 72629 ']' 00:06:45.448 04:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.448 04:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.448 04:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.448 04:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.448 04:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.448 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (72629) - No such process 00:06:45.448 04:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.448 04:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:45.448 04:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:45.448 04:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:45.448 04:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:45.448 04:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:45.448 04:57:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:45.448 04:57:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:45.448 04:57:45 event.cpu_locks.default_locks -- 
event/cpu_locks.sh@26 -- # local lock_files 00:06:45.448 04:57:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:45.448 00:06:45.448 real 0m1.938s 00:06:45.448 user 0m1.996s 00:06:45.448 sys 0m0.636s 00:06:45.448 04:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.448 04:57:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.448 ************************************ 00:06:45.449 END TEST default_locks 00:06:45.449 ************************************ 00:06:45.449 04:57:45 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:45.449 04:57:45 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:45.449 04:57:45 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:45.449 04:57:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.449 04:57:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.449 ************************************ 00:06:45.449 START TEST default_locks_via_rpc 00:06:45.449 ************************************ 00:06:45.449 04:57:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:45.449 04:57:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=72681 00:06:45.449 04:57:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 72681 00:06:45.449 04:57:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:45.449 04:57:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 72681 ']' 00:06:45.449 04:57:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.449 04:57:45 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.449 04:57:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.449 04:57:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.449 04:57:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.449 [2024-07-23 04:57:45.547589] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:06:45.449 [2024-07-23 04:57:45.547702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72681 ] 00:06:45.708 [2024-07-23 04:57:45.685819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.708 [2024-07-23 04:57:45.759988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.275 04:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.275 04:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:46.275 04:57:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:46.275 04:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.275 04:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.275 04:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.275 04:57:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:46.275 
04:57:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:46.275 04:57:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:46.276 04:57:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:46.276 04:57:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:46.276 04:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.276 04:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.276 04:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.276 04:57:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 72681 00:06:46.276 04:57:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 72681 00:06:46.276 04:57:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:46.535 04:57:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 72681 00:06:46.535 04:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 72681 ']' 00:06:46.535 04:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 72681 00:06:46.535 04:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:46.535 04:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:46.535 04:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72681 00:06:46.535 04:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:46.535 04:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:06:46.535 killing process with pid 72681 00:06:46.535 04:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72681' 00:06:46.535 04:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 72681 00:06:46.535 04:57:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 72681 00:06:47.103 ************************************ 00:06:47.103 END TEST default_locks_via_rpc 00:06:47.103 ************************************ 00:06:47.103 00:06:47.103 real 0m1.776s 00:06:47.103 user 0m1.758s 00:06:47.103 sys 0m0.563s 00:06:47.103 04:57:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.103 04:57:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.103 04:57:47 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:47.103 04:57:47 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:47.103 04:57:47 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:47.103 04:57:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.103 04:57:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.103 ************************************ 00:06:47.103 START TEST non_locking_app_on_locked_coremask 00:06:47.103 ************************************ 00:06:47.103 04:57:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:47.103 04:57:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=72732 00:06:47.104 04:57:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:47.104 04:57:47 
event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 72732 /var/tmp/spdk.sock 00:06:47.104 04:57:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 72732 ']' 00:06:47.104 04:57:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.104 04:57:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.104 04:57:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.104 04:57:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.104 04:57:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.362 [2024-07-23 04:57:47.378030] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:06:47.363 [2024-07-23 04:57:47.378128] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72732 ] 00:06:47.363 [2024-07-23 04:57:47.513886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.363 [2024-07-23 04:57:47.577567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.299 04:57:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.299 04:57:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:48.299 04:57:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:48.299 04:57:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=72748 00:06:48.299 04:57:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 72748 /var/tmp/spdk2.sock 00:06:48.299 04:57:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 72748 ']' 00:06:48.299 04:57:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.299 04:57:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.299 04:57:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:48.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:48.299 04:57:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.299 04:57:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.299 [2024-07-23 04:57:48.367251] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:06:48.299 [2024-07-23 04:57:48.367360] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72748 ] 00:06:48.299 [2024-07-23 04:57:48.500448] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:48.299 [2024-07-23 04:57:48.500532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.558 [2024-07-23 04:57:48.671497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.124 04:57:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.124 04:57:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:49.124 04:57:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 72732 00:06:49.124 04:57:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72732 00:06:49.124 04:57:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:50.060 04:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 72732 00:06:50.060 04:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 72732 ']' 00:06:50.060 04:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 72732 00:06:50.060 04:57:50 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:50.060 04:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:50.060 04:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72732 00:06:50.060 04:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:50.060 killing process with pid 72732 00:06:50.060 04:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:50.060 04:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72732' 00:06:50.060 04:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 72732 00:06:50.060 04:57:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 72732 00:06:51.035 04:57:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 72748 00:06:51.035 04:57:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 72748 ']' 00:06:51.035 04:57:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 72748 00:06:51.035 04:57:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:51.035 04:57:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:51.035 04:57:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72748 00:06:51.035 killing process with pid 72748 00:06:51.035 04:57:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:51.035 04:57:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:51.035 04:57:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72748' 00:06:51.035 04:57:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 72748 00:06:51.035 04:57:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 72748 00:06:51.603 00:06:51.603 real 0m4.320s 00:06:51.603 user 0m4.528s 00:06:51.603 sys 0m1.264s 00:06:51.603 04:57:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.603 04:57:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.603 ************************************ 00:06:51.603 END TEST non_locking_app_on_locked_coremask 00:06:51.603 ************************************ 00:06:51.603 04:57:51 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:51.603 04:57:51 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:51.603 04:57:51 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:51.603 04:57:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.603 04:57:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.603 ************************************ 00:06:51.603 START TEST locking_app_on_unlocked_coremask 00:06:51.603 ************************************ 00:06:51.603 04:57:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:51.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:51.603 04:57:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=72815 00:06:51.603 04:57:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 72815 /var/tmp/spdk.sock 00:06:51.603 04:57:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:51.603 04:57:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 72815 ']' 00:06:51.603 04:57:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.603 04:57:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:51.603 04:57:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.603 04:57:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:51.603 04:57:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.603 [2024-07-23 04:57:51.753491] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:06:51.603 [2024-07-23 04:57:51.753596] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72815 ] 00:06:51.862 [2024-07-23 04:57:51.888099] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:51.862 [2024-07-23 04:57:51.888150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.862 [2024-07-23 04:57:51.957883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:52.799 04:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:52.799 04:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:52.799 04:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:52.799 04:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=72833 00:06:52.799 04:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 72833 /var/tmp/spdk2.sock 00:06:52.799 04:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 72833 ']' 00:06:52.799 04:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:52.799 04:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.799 04:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:52.799 04:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.799 04:57:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.799 [2024-07-23 04:57:52.723191] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:06:52.799 [2024-07-23 04:57:52.723492] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72833 ] 00:06:52.799 [2024-07-23 04:57:52.856615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.058 [2024-07-23 04:57:53.058033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.624 04:57:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:53.624 04:57:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:53.624 04:57:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 72833 00:06:53.624 04:57:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72833 00:06:53.624 04:57:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:54.560 04:57:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 72815 00:06:54.560 04:57:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 72815 ']' 00:06:54.560 04:57:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 72815 00:06:54.560 04:57:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:54.560 04:57:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:54.560 04:57:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72815 00:06:54.560 04:57:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:54.560 04:57:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:54.560 killing process with pid 72815 00:06:54.560 04:57:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72815' 00:06:54.560 04:57:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 72815 00:06:54.560 04:57:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 72815 00:06:55.497 04:57:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 72833 00:06:55.497 04:57:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 72833 ']' 00:06:55.497 04:57:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 72833 00:06:55.497 04:57:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:55.497 04:57:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:55.497 04:57:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72833 00:06:55.497 killing process with pid 72833 00:06:55.497 04:57:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:55.497 04:57:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:55.497 04:57:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72833' 00:06:55.497 04:57:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 72833 00:06:55.497 04:57:55 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@972 -- # wait 72833 00:06:55.755 ************************************ 00:06:55.755 END TEST locking_app_on_unlocked_coremask 00:06:55.755 ************************************ 00:06:55.755 00:06:55.755 real 0m4.264s 00:06:55.755 user 0m4.550s 00:06:55.755 sys 0m1.221s 00:06:55.755 04:57:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.755 04:57:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.755 04:57:55 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:55.755 04:57:55 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:55.755 04:57:55 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:55.755 04:57:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.755 04:57:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.014 ************************************ 00:06:56.014 START TEST locking_app_on_locked_coremask 00:06:56.014 ************************************ 00:06:56.014 04:57:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:56.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:56.014 04:57:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=72900 00:06:56.014 04:57:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 72900 /var/tmp/spdk.sock 00:06:56.014 04:57:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 72900 ']' 00:06:56.014 04:57:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.014 04:57:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.014 04:57:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:56.014 04:57:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.014 04:57:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.014 04:57:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.014 [2024-07-23 04:57:56.068157] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:06:56.014 [2024-07-23 04:57:56.068264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72900 ] 00:06:56.014 [2024-07-23 04:57:56.202035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.273 [2024-07-23 04:57:56.265448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.840 04:57:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:56.840 04:57:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:56.841 04:57:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=72916 00:06:56.841 04:57:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:56.841 04:57:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 72916 /var/tmp/spdk2.sock 00:06:56.841 04:57:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:56.841 04:57:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 72916 /var/tmp/spdk2.sock 00:06:56.841 04:57:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:56.841 04:57:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.841 04:57:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:56.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:56.841 04:57:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.841 04:57:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 72916 /var/tmp/spdk2.sock 00:06:56.841 04:57:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 72916 ']' 00:06:56.841 04:57:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:56.841 04:57:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.841 04:57:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:56.841 04:57:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.841 04:57:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.100 [2024-07-23 04:57:57.065063] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:06:57.100 [2024-07-23 04:57:57.065161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72916 ] 00:06:57.100 [2024-07-23 04:57:57.205285] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 72900 has claimed it. 00:06:57.100 [2024-07-23 04:57:57.209379] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:57.667 ERROR: process (pid: 72916) is no longer running 00:06:57.667 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (72916) - No such process 00:06:57.667 04:57:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.667 04:57:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:57.667 04:57:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:57.667 04:57:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:57.667 04:57:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:57.667 04:57:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:57.667 04:57:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 72900 00:06:57.667 04:57:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72900 00:06:57.667 04:57:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:57.926 04:57:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 72900 00:06:57.926 04:57:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 72900 ']' 00:06:57.926 04:57:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 72900 00:06:57.926 04:57:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:57.926 04:57:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:57.926 04:57:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72900 00:06:58.185 
killing process with pid 72900 00:06:58.185 04:57:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:58.185 04:57:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:58.185 04:57:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72900' 00:06:58.185 04:57:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 72900 00:06:58.185 04:57:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 72900 00:06:58.444 00:06:58.444 real 0m2.661s 00:06:58.444 user 0m2.995s 00:06:58.444 sys 0m0.682s 00:06:58.444 04:57:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.444 04:57:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.444 ************************************ 00:06:58.444 END TEST locking_app_on_locked_coremask 00:06:58.444 ************************************ 00:06:58.703 04:57:58 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:58.703 04:57:58 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:58.703 04:57:58 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:58.703 04:57:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.703 04:57:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.703 ************************************ 00:06:58.703 START TEST locking_overlapped_coremask 00:06:58.703 ************************************ 00:06:58.703 04:57:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:58.703 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:06:58.703 04:57:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=72967 00:06:58.703 04:57:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:58.703 04:57:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 72967 /var/tmp/spdk.sock 00:06:58.703 04:57:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 72967 ']' 00:06:58.703 04:57:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.703 04:57:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:58.703 04:57:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.703 04:57:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:58.703 04:57:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.703 [2024-07-23 04:57:58.764688] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:06:58.703 [2024-07-23 04:57:58.764771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72967 ] 00:06:58.703 [2024-07-23 04:57:58.893602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:58.962 [2024-07-23 04:57:58.964949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.962 [2024-07-23 04:57:58.965023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.962 [2024-07-23 04:57:58.965030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.530 04:57:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:59.530 04:57:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:59.530 04:57:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:59.530 04:57:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=72985 00:06:59.530 04:57:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 72985 /var/tmp/spdk2.sock 00:06:59.530 04:57:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:59.530 04:57:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 72985 /var/tmp/spdk2.sock 00:06:59.530 04:57:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:59.530 04:57:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.530 04:57:59 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:59.530 04:57:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.530 04:57:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 72985 /var/tmp/spdk2.sock 00:06:59.530 04:57:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 72985 ']' 00:06:59.530 04:57:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:59.530 04:57:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:59.530 04:57:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:59.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:59.530 04:57:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:59.789 04:57:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.789 [2024-07-23 04:57:59.812251] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:06:59.789 [2024-07-23 04:57:59.812347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72985 ] 00:06:59.789 [2024-07-23 04:57:59.953538] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72967 has claimed it. 00:06:59.789 [2024-07-23 04:57:59.953687] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:00.358 ERROR: process (pid: 72985) is no longer running 00:07:00.358 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (72985) - No such process 00:07:00.358 04:58:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.358 04:58:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:00.358 04:58:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:00.358 04:58:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:00.358 04:58:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:00.358 04:58:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:00.358 04:58:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:00.358 04:58:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:00.358 04:58:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:00.358 04:58:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:00.358 04:58:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 72967 00:07:00.358 04:58:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 72967 ']' 00:07:00.358 04:58:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 72967 00:07:00.358 04:58:00 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:07:00.358 04:58:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:00.358 04:58:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72967 00:07:00.358 04:58:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:00.358 killing process with pid 72967 00:07:00.358 04:58:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:00.358 04:58:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72967' 00:07:00.358 04:58:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 72967 00:07:00.358 04:58:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 72967 00:07:00.926 00:07:00.926 real 0m2.330s 00:07:00.926 user 0m6.520s 00:07:00.926 sys 0m0.496s 00:07:00.926 04:58:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.926 ************************************ 00:07:00.926 END TEST locking_overlapped_coremask 00:07:00.926 ************************************ 00:07:00.926 04:58:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.926 04:58:01 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:00.926 04:58:01 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:00.926 04:58:01 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:00.926 04:58:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.926 04:58:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 
00:07:00.926 ************************************ 00:07:00.926 START TEST locking_overlapped_coremask_via_rpc 00:07:00.926 ************************************ 00:07:00.926 04:58:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:07:00.926 04:58:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=73025 00:07:00.926 04:58:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 73025 /var/tmp/spdk.sock 00:07:00.926 04:58:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 73025 ']' 00:07:00.926 04:58:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:00.926 04:58:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.926 04:58:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:00.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.926 04:58:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.926 04:58:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:00.926 04:58:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.184 [2024-07-23 04:58:01.146971] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:07:01.184 [2024-07-23 04:58:01.147539] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73025 ] 00:07:01.184 [2024-07-23 04:58:01.279461] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:01.184 [2024-07-23 04:58:01.279513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:01.184 [2024-07-23 04:58:01.351189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.184 [2024-07-23 04:58:01.351361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.184 [2024-07-23 04:58:01.351363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.119 04:58:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:02.119 04:58:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:02.119 04:58:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:02.119 04:58:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=73043 00:07:02.119 04:58:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 73043 /var/tmp/spdk2.sock 00:07:02.119 04:58:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 73043 ']' 00:07:02.119 04:58:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:02.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:02.119 04:58:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:02.119 04:58:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:02.119 04:58:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:02.119 04:58:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.119 [2024-07-23 04:58:02.164769] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:07:02.119 [2024-07-23 04:58:02.164858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73043 ] 00:07:02.119 [2024-07-23 04:58:02.305991] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:02.119 [2024-07-23 04:58:02.306025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:02.377 [2024-07-23 04:58:02.453445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:02.377 [2024-07-23 04:58:02.453567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.377 [2024-07-23 04:58:02.453568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:02.945 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:02.945 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:02.945 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:02.945 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.945 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.945 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.945 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:02.945 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:02.945 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:02.945 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:02.945 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:02.945 04:58:03 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:02.945 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:02.945 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:02.945 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.945 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.945 [2024-07-23 04:58:03.054525] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 73025 has claimed it. 00:07:02.945 request: 00:07:02.945 { 00:07:02.945 "method": "framework_enable_cpumask_locks", 00:07:02.945 "req_id": 1 00:07:02.945 } 00:07:02.945 Got JSON-RPC error response 00:07:02.945 response: 00:07:02.945 { 00:07:02.945 "code": -32603, 00:07:02.945 "message": "Failed to claim CPU core: 2" 00:07:02.945 } 00:07:02.945 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:02.945 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:02.945 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:02.945 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:02.945 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:02.945 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 73025 /var/tmp/spdk.sock 00:07:02.945 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # 
'[' -z 73025 ']' 00:07:02.945 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.946 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:02.946 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.946 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:02.946 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.212 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:03.212 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:03.212 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 73043 /var/tmp/spdk2.sock 00:07:03.212 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 73043 ']' 00:07:03.212 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:03.212 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:03.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:03.212 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:07:03.212 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:03.212 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.471 ************************************ 00:07:03.471 END TEST locking_overlapped_coremask_via_rpc 00:07:03.471 ************************************ 00:07:03.471 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:03.471 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:03.471 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:03.471 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:03.471 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:03.471 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:03.471 00:07:03.471 real 0m2.427s 00:07:03.471 user 0m1.160s 00:07:03.471 sys 0m0.199s 00:07:03.471 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.471 04:58:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.471 04:58:03 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:03.471 04:58:03 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:03.471 04:58:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 
73025 ]] 00:07:03.471 04:58:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 73025 00:07:03.471 04:58:03 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 73025 ']' 00:07:03.471 04:58:03 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 73025 00:07:03.471 04:58:03 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:03.471 04:58:03 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:03.471 04:58:03 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73025 00:07:03.471 killing process with pid 73025 00:07:03.471 04:58:03 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:03.471 04:58:03 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:03.471 04:58:03 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73025' 00:07:03.471 04:58:03 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 73025 00:07:03.471 04:58:03 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 73025 00:07:04.039 04:58:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 73043 ]] 00:07:04.039 04:58:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 73043 00:07:04.039 04:58:04 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 73043 ']' 00:07:04.039 04:58:04 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 73043 00:07:04.039 04:58:04 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:04.039 04:58:04 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:04.039 04:58:04 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73043 00:07:04.039 killing process with pid 73043 00:07:04.039 04:58:04 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:04.039 04:58:04 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:04.039 
04:58:04 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73043' 00:07:04.039 04:58:04 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 73043 00:07:04.039 04:58:04 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 73043 00:07:04.607 04:58:04 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:04.607 04:58:04 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:04.607 04:58:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 73025 ]] 00:07:04.607 04:58:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 73025 00:07:04.607 04:58:04 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 73025 ']' 00:07:04.607 04:58:04 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 73025 00:07:04.607 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (73025) - No such process 00:07:04.607 Process with pid 73025 is not found 00:07:04.607 Process with pid 73043 is not found 00:07:04.607 04:58:04 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 73025 is not found' 00:07:04.607 04:58:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 73043 ]] 00:07:04.607 04:58:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 73043 00:07:04.607 04:58:04 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 73043 ']' 00:07:04.607 04:58:04 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 73043 00:07:04.607 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (73043) - No such process 00:07:04.607 04:58:04 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 73043 is not found' 00:07:04.607 04:58:04 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:04.607 ************************************ 00:07:04.607 END TEST cpu_locks 00:07:04.607 ************************************ 00:07:04.607 00:07:04.607 real 0m21.232s 00:07:04.607 user 0m35.868s 00:07:04.607 sys 0m5.986s 
00:07:04.607 04:58:04 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.607 04:58:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.607 04:58:04 event -- common/autotest_common.sh@1142 -- # return 0 00:07:04.607 ************************************ 00:07:04.607 END TEST event 00:07:04.607 ************************************ 00:07:04.607 00:07:04.607 real 0m48.462s 00:07:04.607 user 1m31.520s 00:07:04.607 sys 0m9.594s 00:07:04.607 04:58:04 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.607 04:58:04 event -- common/autotest_common.sh@10 -- # set +x 00:07:04.607 04:58:04 -- common/autotest_common.sh@1142 -- # return 0 00:07:04.607 04:58:04 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:04.607 04:58:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:04.607 04:58:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.607 04:58:04 -- common/autotest_common.sh@10 -- # set +x 00:07:04.607 ************************************ 00:07:04.607 START TEST thread 00:07:04.607 ************************************ 00:07:04.607 04:58:04 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:04.607 * Looking for test storage... 
00:07:04.607 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:04.607 04:58:04 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:04.607 04:58:04 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:04.607 04:58:04 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.607 04:58:04 thread -- common/autotest_common.sh@10 -- # set +x 00:07:04.607 ************************************ 00:07:04.607 START TEST thread_poller_perf 00:07:04.607 ************************************ 00:07:04.607 04:58:04 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:04.607 [2024-07-23 04:58:04.785063] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:07:04.607 [2024-07-23 04:58:04.785164] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73171 ] 00:07:04.866 [2024-07-23 04:58:04.919917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.866 [2024-07-23 04:58:04.986224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.866 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:06.241 ====================================== 00:07:06.241 busy:2206602338 (cyc) 00:07:06.241 total_run_count: 396000 00:07:06.241 tsc_hz: 2200000000 (cyc) 00:07:06.241 ====================================== 00:07:06.241 poller_cost: 5572 (cyc), 2532 (nsec) 00:07:06.241 00:07:06.241 real 0m1.293s 00:07:06.241 user 0m1.126s 00:07:06.241 sys 0m0.059s 00:07:06.241 04:58:06 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.241 04:58:06 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:06.241 ************************************ 00:07:06.241 END TEST thread_poller_perf 00:07:06.241 ************************************ 00:07:06.241 04:58:06 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:06.242 04:58:06 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:06.242 04:58:06 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:06.242 04:58:06 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.242 04:58:06 thread -- common/autotest_common.sh@10 -- # set +x 00:07:06.242 ************************************ 00:07:06.242 START TEST thread_poller_perf 00:07:06.242 ************************************ 00:07:06.242 04:58:06 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:06.242 [2024-07-23 04:58:06.133983] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:07:06.242 [2024-07-23 04:58:06.134295] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73203 ] 00:07:06.242 [2024-07-23 04:58:06.265085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.242 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:06.242 [2024-07-23 04:58:06.339527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.616 ====================================== 00:07:07.616 busy:2201947952 (cyc) 00:07:07.616 total_run_count: 5234000 00:07:07.616 tsc_hz: 2200000000 (cyc) 00:07:07.616 ====================================== 00:07:07.616 poller_cost: 420 (cyc), 190 (nsec) 00:07:07.616 ************************************ 00:07:07.616 END TEST thread_poller_perf 00:07:07.616 ************************************ 00:07:07.616 00:07:07.616 real 0m1.298s 00:07:07.616 user 0m1.130s 00:07:07.616 sys 0m0.061s 00:07:07.616 04:58:07 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.616 04:58:07 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:07.616 04:58:07 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:07.616 04:58:07 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:07.616 ************************************ 00:07:07.616 END TEST thread 00:07:07.616 ************************************ 00:07:07.616 00:07:07.616 real 0m2.767s 00:07:07.616 user 0m2.322s 00:07:07.616 sys 0m0.220s 00:07:07.616 04:58:07 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.616 04:58:07 thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.616 04:58:07 -- common/autotest_common.sh@1142 -- # return 0 00:07:07.616 04:58:07 -- spdk/autotest.sh@183 -- # run_test accel 
/home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:07.616 04:58:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:07.616 04:58:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.616 04:58:07 -- common/autotest_common.sh@10 -- # set +x 00:07:07.616 ************************************ 00:07:07.616 START TEST accel 00:07:07.616 ************************************ 00:07:07.616 04:58:07 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:07.616 * Looking for test storage... 00:07:07.616 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:07.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.616 04:58:07 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:07.617 04:58:07 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:07.617 04:58:07 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:07.617 04:58:07 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=73278 00:07:07.617 04:58:07 accel -- accel/accel.sh@63 -- # waitforlisten 73278 00:07:07.617 04:58:07 accel -- common/autotest_common.sh@829 -- # '[' -z 73278 ']' 00:07:07.617 04:58:07 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:07.617 04:58:07 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.617 04:58:07 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:07.617 04:58:07 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:07.617 04:58:07 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:07.617 04:58:07 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:07.617 04:58:07 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.617 04:58:07 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.617 04:58:07 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.617 04:58:07 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.617 04:58:07 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.617 04:58:07 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.617 04:58:07 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:07.617 04:58:07 accel -- accel/accel.sh@41 -- # jq -r . 00:07:07.617 [2024-07-23 04:58:07.651910] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:07:07.617 [2024-07-23 04:58:07.652153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73278 ] 00:07:07.617 [2024-07-23 04:58:07.779918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.903 [2024-07-23 04:58:07.851858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.470 04:58:08 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:08.470 04:58:08 accel -- common/autotest_common.sh@862 -- # return 0 00:07:08.470 04:58:08 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:08.470 04:58:08 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:08.470 04:58:08 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:08.470 04:58:08 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:08.470 04:58:08 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:08.470 04:58:08 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:08.470 04:58:08 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:08.470 04:58:08 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.470 04:58:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.470 04:58:08 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.470 04:58:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.470 04:58:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:08.470 04:58:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:08.470 04:58:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:08.470 04:58:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.470 04:58:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:08.470 04:58:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:08.470 04:58:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:08.470 04:58:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.470 04:58:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:08.470 04:58:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:08.470 04:58:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:08.470 04:58:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.470 04:58:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:08.470 04:58:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:08.470 04:58:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:08.470 04:58:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.470 04:58:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:08.470 04:58:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:08.470 04:58:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:08.470 04:58:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.470 04:58:08 
accel -- accel/accel.sh@72 -- # IFS== 00:07:08.470 04:58:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:08.470 04:58:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:08.470 04:58:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.470 04:58:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:08.470 04:58:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:08.470 04:58:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:08.470 04:58:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.470 04:58:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:08.470 04:58:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:08.470 04:58:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:08.470 04:58:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.470 04:58:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:08.470 04:58:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:08.470 04:58:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:08.470 04:58:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.470 04:58:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:08.470 04:58:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:08.470 04:58:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:08.470 04:58:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.470 04:58:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:08.470 04:58:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:08.470 04:58:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:08.470 04:58:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.470 04:58:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:08.470 04:58:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:08.470 04:58:08 accel -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:07:08.470 04:58:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.470 04:58:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:08.470 04:58:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:08.470 04:58:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:08.470 04:58:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.470 04:58:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:08.470 04:58:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:08.470 04:58:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:08.470 04:58:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:08.470 04:58:08 accel -- accel/accel.sh@72 -- # IFS== 00:07:08.470 04:58:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:08.470 04:58:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:08.470 04:58:08 accel -- accel/accel.sh@75 -- # killprocess 73278 00:07:08.470 04:58:08 accel -- common/autotest_common.sh@948 -- # '[' -z 73278 ']' 00:07:08.470 04:58:08 accel -- common/autotest_common.sh@952 -- # kill -0 73278 00:07:08.470 04:58:08 accel -- common/autotest_common.sh@953 -- # uname 00:07:08.470 04:58:08 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:08.470 04:58:08 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73278 00:07:08.729 04:58:08 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:08.729 04:58:08 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:08.729 04:58:08 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73278' 00:07:08.729 killing process with pid 73278 00:07:08.729 04:58:08 accel -- common/autotest_common.sh@967 -- # kill 73278 00:07:08.729 04:58:08 accel -- common/autotest_common.sh@972 -- # wait 73278 00:07:08.987 04:58:09 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:08.987 
04:58:09 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:08.987 04:58:09 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:08.987 04:58:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.987 04:58:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.987 04:58:09 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:08.987 04:58:09 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:08.987 04:58:09 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:08.987 04:58:09 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.987 04:58:09 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.987 04:58:09 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.987 04:58:09 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.987 04:58:09 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.987 04:58:09 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:08.987 04:58:09 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:07:08.987 04:58:09 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.987 04:58:09 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:08.987 04:58:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:08.987 04:58:09 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:08.987 04:58:09 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:08.987 04:58:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.987 04:58:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.987 ************************************ 00:07:08.987 START TEST accel_missing_filename 00:07:08.987 ************************************ 00:07:08.987 04:58:09 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:08.987 04:58:09 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:08.987 04:58:09 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:08.987 04:58:09 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:08.987 04:58:09 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:08.988 04:58:09 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:08.988 04:58:09 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:08.988 04:58:09 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:08.988 04:58:09 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:08.988 04:58:09 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:08.988 04:58:09 accel.accel_missing_filename -- 
accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.988 04:58:09 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.988 04:58:09 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.988 04:58:09 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.988 04:58:09 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.988 04:58:09 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:08.988 04:58:09 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:08.988 [2024-07-23 04:58:09.163037] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:07:08.988 [2024-07-23 04:58:09.163128] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73329 ] 00:07:09.246 [2024-07-23 04:58:09.298718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.246 [2024-07-23 04:58:09.358845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.246 [2024-07-23 04:58:09.411272] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:09.505 [2024-07-23 04:58:09.488143] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:07:09.505 A filename is required. 
00:07:09.505 04:58:09 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:09.505 04:58:09 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:09.505 04:58:09 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:09.505 04:58:09 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:09.505 04:58:09 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:09.505 04:58:09 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:09.505 00:07:09.505 real 0m0.406s 00:07:09.505 user 0m0.246s 00:07:09.505 sys 0m0.106s 00:07:09.505 04:58:09 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.505 04:58:09 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:09.505 ************************************ 00:07:09.505 END TEST accel_missing_filename 00:07:09.505 ************************************ 00:07:09.505 04:58:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:09.505 04:58:09 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:09.505 04:58:09 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:09.505 04:58:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.505 04:58:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.505 ************************************ 00:07:09.505 START TEST accel_compress_verify 00:07:09.505 ************************************ 00:07:09.505 04:58:09 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:09.505 04:58:09 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:09.505 04:58:09 accel.accel_compress_verify -- 
common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:09.505 04:58:09 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:09.505 04:58:09 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:09.505 04:58:09 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:09.505 04:58:09 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:09.505 04:58:09 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:09.505 04:58:09 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:09.505 04:58:09 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:09.505 04:58:09 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.505 04:58:09 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.505 04:58:09 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.505 04:58:09 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.505 04:58:09 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.505 04:58:09 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:09.505 04:58:09 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:09.505 [2024-07-23 04:58:09.620494] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:07:09.505 [2024-07-23 04:58:09.620583] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73354 ] 00:07:09.764 [2024-07-23 04:58:09.752357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.764 [2024-07-23 04:58:09.805543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.764 [2024-07-23 04:58:09.857775] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:09.764 [2024-07-23 04:58:09.934072] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:07:10.023 00:07:10.023 Compression does not support the verify option, aborting. 00:07:10.023 04:58:09 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:10.023 04:58:09 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:10.023 04:58:09 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:10.023 04:58:09 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:10.023 04:58:09 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:10.023 04:58:09 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:10.023 00:07:10.023 real 0m0.394s 00:07:10.023 user 0m0.233s 00:07:10.023 sys 0m0.108s 00:07:10.023 04:58:09 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.023 04:58:09 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:10.023 ************************************ 00:07:10.023 END TEST accel_compress_verify 00:07:10.023 ************************************ 00:07:10.023 04:58:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:10.023 04:58:10 accel -- accel/accel.sh@95 -- # run_test 
accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:10.023 04:58:10 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:10.023 04:58:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.023 04:58:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.023 ************************************ 00:07:10.023 START TEST accel_wrong_workload 00:07:10.023 ************************************ 00:07:10.023 04:58:10 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:10.023 04:58:10 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:10.023 04:58:10 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:10.023 04:58:10 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:10.023 04:58:10 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.023 04:58:10 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:10.023 04:58:10 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.023 04:58:10 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:10.023 04:58:10 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:10.023 04:58:10 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:10.023 04:58:10 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.023 04:58:10 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.023 04:58:10 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.023 04:58:10 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.023 04:58:10 accel.accel_wrong_workload -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.023 04:58:10 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:10.023 04:58:10 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:10.023 Unsupported workload type: foobar 00:07:10.023 [2024-07-23 04:58:10.062514] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:10.023 accel_perf options: 00:07:10.023 [-h help message] 00:07:10.023 [-q queue depth per core] 00:07:10.023 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:10.023 [-T number of threads per core 00:07:10.023 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:10.023 [-t time in seconds] 00:07:10.023 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:10.023 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:10.023 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:10.023 [-l for compress/decompress workloads, name of uncompressed input file 00:07:10.023 [-S for crc32c workload, use this seed value (default 0) 00:07:10.023 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:10.023 [-f for fill workload, use this BYTE value (default 255) 00:07:10.023 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:10.023 [-y verify result if this switch is on] 00:07:10.023 [-a tasks to allocate per core (default: same value as -q)] 00:07:10.023 Can be used to spread operations across a wider range of memory. 
00:07:10.023 04:58:10 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:10.023 04:58:10 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:10.023 04:58:10 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:10.023 04:58:10 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:10.023 ************************************ 00:07:10.023 END TEST accel_wrong_workload 00:07:10.023 ************************************ 00:07:10.023 00:07:10.023 real 0m0.027s 00:07:10.023 user 0m0.014s 00:07:10.023 sys 0m0.012s 00:07:10.023 04:58:10 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.023 04:58:10 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:10.023 04:58:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:10.023 04:58:10 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:10.023 04:58:10 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:10.023 04:58:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.023 04:58:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.023 ************************************ 00:07:10.023 START TEST accel_negative_buffers 00:07:10.023 ************************************ 00:07:10.023 04:58:10 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:10.023 04:58:10 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:10.023 04:58:10 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:10.024 04:58:10 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:10.024 04:58:10 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:07:10.024 04:58:10 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:10.024 04:58:10 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.024 04:58:10 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:10.024 04:58:10 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:10.024 04:58:10 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:10.024 04:58:10 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.024 04:58:10 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.024 04:58:10 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.024 04:58:10 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.024 04:58:10 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.024 04:58:10 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:10.024 04:58:10 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:10.024 -x option must be non-negative. 00:07:10.024 [2024-07-23 04:58:10.139841] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:10.024 accel_perf options: 00:07:10.024 [-h help message] 00:07:10.024 [-q queue depth per core] 00:07:10.024 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:10.024 [-T number of threads per core 00:07:10.024 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:07:10.024 [-t time in seconds] 00:07:10.024 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:10.024 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:10.024 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:10.024 [-l for compress/decompress workloads, name of uncompressed input file 00:07:10.024 [-S for crc32c workload, use this seed value (default 0) 00:07:10.024 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:10.024 [-f for fill workload, use this BYTE value (default 255) 00:07:10.024 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:10.024 [-y verify result if this switch is on] 00:07:10.024 [-a tasks to allocate per core (default: same value as -q)] 00:07:10.024 Can be used to spread operations across a wider range of memory. 
00:07:10.024 ************************************ 00:07:10.024 END TEST accel_negative_buffers 00:07:10.024 ************************************ 00:07:10.024 04:58:10 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:10.024 04:58:10 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:10.024 04:58:10 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:10.024 04:58:10 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:10.024 00:07:10.024 real 0m0.029s 00:07:10.024 user 0m0.021s 00:07:10.024 sys 0m0.008s 00:07:10.024 04:58:10 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.024 04:58:10 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:10.024 04:58:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:10.024 04:58:10 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:10.024 04:58:10 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:10.024 04:58:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.024 04:58:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.024 ************************************ 00:07:10.024 START TEST accel_crc32c 00:07:10.024 ************************************ 00:07:10.024 04:58:10 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:10.024 04:58:10 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:10.024 04:58:10 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:10.024 04:58:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.024 04:58:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.024 04:58:10 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:10.024 04:58:10 accel.accel_crc32c -- 
accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:10.024 04:58:10 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:10.024 04:58:10 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.024 04:58:10 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.024 04:58:10 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.024 04:58:10 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.024 04:58:10 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.024 04:58:10 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:10.024 04:58:10 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:10.024 [2024-07-23 04:58:10.219428] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:07:10.024 [2024-07-23 04:58:10.219504] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73412 ] 00:07:10.283 [2024-07-23 04:58:10.354268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.283 [2024-07-23 04:58:10.405030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.283 04:58:10 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r 
var val 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.283 
04:58:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.283 04:58:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.284 04:58:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.284 04:58:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.284 04:58:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.284 04:58:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.659 04:58:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.659 04:58:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.659 04:58:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.659 04:58:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.659 04:58:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.659 04:58:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.659 04:58:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.659 04:58:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.659 04:58:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.659 04:58:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.659 04:58:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.659 04:58:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.659 04:58:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.659 04:58:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:07:11.659 04:58:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.659 04:58:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.659 04:58:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.659 04:58:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.659 04:58:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.659 04:58:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.659 04:58:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.659 04:58:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.659 04:58:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.659 ************************************ 00:07:11.659 END TEST accel_crc32c 00:07:11.659 ************************************ 00:07:11.659 04:58:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.659 04:58:11 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:11.659 04:58:11 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:11.659 04:58:11 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.659 00:07:11.659 real 0m1.396s 00:07:11.659 user 0m1.198s 00:07:11.659 sys 0m0.104s 00:07:11.659 04:58:11 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.659 04:58:11 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:11.659 04:58:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:11.659 04:58:11 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:11.659 04:58:11 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:11.659 04:58:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.659 04:58:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.659 ************************************ 00:07:11.659 START TEST accel_crc32c_C2 00:07:11.659 
************************************ 00:07:11.659 04:58:11 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:11.659 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:11.659 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:11.659 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.659 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:11.659 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.659 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:11.659 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.659 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.659 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.659 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.659 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.659 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.659 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:11.659 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:11.660 [2024-07-23 04:58:11.668822] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:07:11.660 [2024-07-23 04:58:11.669527] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73448 ] 00:07:11.660 [2024-07-23 04:58:11.804805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.660 [2024-07-23 04:58:11.857495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.919 04:58:11 
accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.919 04:58:11 accel.accel_crc32c_C2 
-- accel/accel.sh@20 -- # val=32 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.919 
04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.919 04:58:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.855 04:58:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.855 04:58:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.855 04:58:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.855 04:58:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.855 04:58:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.855 04:58:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.855 04:58:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.855 04:58:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.855 04:58:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.855 04:58:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.855 04:58:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.855 04:58:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.855 04:58:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.855 04:58:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.855 04:58:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.855 04:58:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.855 04:58:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.855 04:58:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.855 04:58:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.855 04:58:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.855 04:58:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.855 04:58:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case 
"$var" in 00:07:12.855 04:58:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.855 04:58:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.855 04:58:13 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:12.855 04:58:13 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:12.855 04:58:13 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.855 00:07:12.855 real 0m1.399s 00:07:12.855 user 0m0.011s 00:07:12.855 sys 0m0.005s 00:07:12.855 04:58:13 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.855 04:58:13 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:12.855 ************************************ 00:07:12.855 END TEST accel_crc32c_C2 00:07:12.855 ************************************ 00:07:13.114 04:58:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:13.114 04:58:13 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:13.114 04:58:13 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:13.114 04:58:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.114 04:58:13 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.114 ************************************ 00:07:13.114 START TEST accel_copy 00:07:13.114 ************************************ 00:07:13.114 04:58:13 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:13.114 04:58:13 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:13.114 04:58:13 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:13.114 04:58:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.114 04:58:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.114 04:58:13 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:13.114 04:58:13 accel.accel_copy -- accel/accel.sh@12 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:13.114 04:58:13 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:13.114 04:58:13 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.114 04:58:13 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.114 04:58:13 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.114 04:58:13 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.114 04:58:13 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.114 04:58:13 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:13.114 04:58:13 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:13.114 [2024-07-23 04:58:13.119039] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:07:13.114 [2024-07-23 04:58:13.119345] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73482 ] 00:07:13.114 [2024-07-23 04:58:13.265429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.114 [2024-07-23 04:58:13.321101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.372 04:58:13 
accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:13.372 04:58:13 
accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.372 04:58:13 
accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:13.372 04:58:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.307 04:58:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:14.307 04:58:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.307 04:58:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.307 04:58:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.307 04:58:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:14.307 04:58:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.307 04:58:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.307 04:58:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.307 04:58:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:14.307 04:58:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.307 04:58:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.307 04:58:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.307 04:58:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:14.307 04:58:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.307 04:58:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.307 04:58:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.307 04:58:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:14.307 04:58:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.307 04:58:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.307 04:58:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.307 04:58:14 accel.accel_copy -- 
accel/accel.sh@20 -- # val= 00:07:14.307 04:58:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.307 04:58:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.307 04:58:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.307 04:58:14 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:14.307 04:58:14 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:14.307 04:58:14 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.307 00:07:14.307 real 0m1.412s 00:07:14.307 user 0m1.206s 00:07:14.307 sys 0m0.115s 00:07:14.307 04:58:14 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.307 ************************************ 00:07:14.307 END TEST accel_copy 00:07:14.307 ************************************ 00:07:14.307 04:58:14 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:14.566 04:58:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:14.566 04:58:14 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:14.566 04:58:14 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:14.566 04:58:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.566 04:58:14 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.566 ************************************ 00:07:14.566 START TEST accel_fill 00:07:14.566 ************************************ 00:07:14.566 04:58:14 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:14.566 04:58:14 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:14.566 04:58:14 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:14.566 04:58:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.566 04:58:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.566 04:58:14 accel.accel_fill -- accel/accel.sh@15 -- # 
accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:14.566 04:58:14 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:14.566 04:58:14 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:14.566 04:58:14 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.566 04:58:14 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.566 04:58:14 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.566 04:58:14 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.566 04:58:14 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.566 04:58:14 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:14.566 04:58:14 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:14.566 [2024-07-23 04:58:14.574735] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:07:14.566 [2024-07-23 04:58:14.574821] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73517 ] 00:07:14.566 [2024-07-23 04:58:14.710409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.566 [2024-07-23 04:58:14.769122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.825 04:58:14 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.825 04:58:14 accel.accel_fill -- 
accel/accel.sh@19 -- # read -r var val 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.825 04:58:14 
accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:14.825 04:58:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:15.761 04:58:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:15.761 04:58:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:15.761 04:58:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:15.761 04:58:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:15.761 04:58:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:15.761 04:58:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:15.761 04:58:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:15.761 04:58:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:15.761 04:58:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:15.761 04:58:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:15.761 04:58:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:15.761 04:58:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:15.761 04:58:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:15.761 04:58:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:15.761 04:58:15 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:07:15.761 04:58:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:15.761 04:58:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:15.761 04:58:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:15.761 04:58:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:15.761 04:58:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:15.761 04:58:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:15.761 04:58:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:15.761 04:58:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:15.761 04:58:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:15.761 04:58:15 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:15.761 ************************************ 00:07:15.761 END TEST accel_fill 00:07:15.761 ************************************ 00:07:15.761 04:58:15 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:15.761 04:58:15 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.761 00:07:15.761 real 0m1.403s 00:07:15.761 user 0m1.205s 00:07:15.761 sys 0m0.107s 00:07:15.761 04:58:15 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.761 04:58:15 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:16.020 04:58:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:16.020 04:58:15 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:16.020 04:58:15 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:16.020 04:58:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.020 04:58:15 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.020 ************************************ 00:07:16.020 START TEST accel_copy_crc32c 00:07:16.020 ************************************ 00:07:16.020 04:58:16 accel.accel_copy_crc32c -- 
common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:16.020 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:16.020 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:16.020 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.020 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.020 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:16.020 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:16.020 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:16.020 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.020 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.020 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.020 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.020 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.020 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:16.020 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:16.020 [2024-07-23 04:58:16.032827] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:07:16.020 [2024-07-23 04:58:16.032914] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73546 ] 00:07:16.020 [2024-07-23 04:58:16.170640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.020 [2024-07-23 04:58:16.230461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # 
case "$var" in 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.279 04:58:16 
accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 
00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:16.279 04:58:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.214 04:58:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.214 04:58:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.214 04:58:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.214 04:58:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.214 04:58:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.214 04:58:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.214 04:58:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.214 04:58:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.214 04:58:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.214 04:58:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.214 04:58:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.214 04:58:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.214 04:58:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.214 04:58:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.214 04:58:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.214 
04:58:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.214 04:58:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.214 04:58:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.214 04:58:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.214 04:58:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.214 04:58:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:17.214 04:58:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:17.214 04:58:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:17.215 04:58:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:17.215 04:58:17 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:17.215 04:58:17 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:17.215 04:58:17 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.215 00:07:17.215 real 0m1.407s 00:07:17.215 user 0m1.205s 00:07:17.215 sys 0m0.112s 00:07:17.215 04:58:17 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.215 04:58:17 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:17.215 ************************************ 00:07:17.215 END TEST accel_copy_crc32c 00:07:17.215 ************************************ 00:07:17.473 04:58:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:17.473 04:58:17 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:17.473 04:58:17 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:17.473 04:58:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.473 04:58:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:17.473 ************************************ 00:07:17.473 START TEST accel_copy_crc32c_C2 00:07:17.473 
************************************ 00:07:17.473 04:58:17 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:17.473 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:17.473 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:17.473 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.473 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.474 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:17.474 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:17.474 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.474 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.474 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.474 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.474 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.474 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.474 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:17.474 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:17.474 [2024-07-23 04:58:17.487756] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:07:17.474 [2024-07-23 04:58:17.487858] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73580 ] 00:07:17.474 [2024-07-23 04:58:17.616403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.474 [2024-07-23 04:58:17.669360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.732 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:17.732 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.732 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.732 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.732 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:17.732 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.732 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.732 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.732 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:17.732 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.732 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.732 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.732 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:17.732 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.732 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.732 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.732 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:17.732 04:58:17 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.732 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.732 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.732 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:17.732 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.732 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:17.732 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.732 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.732 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:17.732 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.732 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.732 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.732 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:17.732 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.732 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.732 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.732 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:17.733 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.733 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.733 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.733 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:17.733 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.733 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 
00:07:17.733 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.733 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:17.733 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.733 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:17.733 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.733 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.733 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:17.733 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.733 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.733 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.733 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:17.733 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.733 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.733 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.733 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:17.733 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.733 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.733 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.733 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:17.733 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.733 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.733 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.733 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # 
val=Yes 00:07:17.733 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.733 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.733 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.733 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:17.733 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.733 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.733 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:17.733 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:17.733 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.733 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:17.733 04:58:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.668 04:58:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:18.668 04:58:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.668 04:58:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.668 04:58:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.668 04:58:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:18.668 04:58:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.668 04:58:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.668 04:58:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.668 04:58:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:18.668 04:58:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.668 04:58:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.668 04:58:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.668 
04:58:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:18.668 04:58:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.668 04:58:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.668 04:58:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.668 04:58:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:18.668 04:58:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.668 04:58:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.668 04:58:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.668 04:58:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:18.668 04:58:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.668 04:58:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:18.668 04:58:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:18.668 04:58:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:18.668 04:58:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:18.668 04:58:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.668 00:07:18.668 real 0m1.384s 00:07:18.668 user 0m0.014s 00:07:18.668 sys 0m0.004s 00:07:18.668 04:58:18 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.668 ************************************ 00:07:18.668 END TEST accel_copy_crc32c_C2 00:07:18.668 ************************************ 00:07:18.668 04:58:18 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:18.927 04:58:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:18.927 04:58:18 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:18.927 04:58:18 accel -- common/autotest_common.sh@1099 -- # 
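The copy_crc32c cases above drive SPDK's accel framework with a combined copy-plus-CRC-32C workload via `accel_perf -w copy_crc32c`. For orientation, here is a minimal pure-Python model of what that operation computes: a buffer copy alongside a CRC-32C (Castagnoli) checksum, using the reflected polynomial 0x82F63B78. This is an illustrative sketch only, not SPDK's implementation (SPDK's software path is optimized C, and hardware engines may offload it); the function names are hypothetical.

```python
def crc32c(data: bytes, crc: int = 0) -> int:
    """Bit-by-bit CRC-32C (Castagnoli), reflected polynomial 0x82F63B78.

    Illustrative reference implementation; real code would use a table-driven
    or hardware (SSE4.2 crc32) variant.
    """
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else crc & 0)
            # equivalently: crc = (crc >> 1) ^ 0x82F63B78 if lsb was set
    return crc ^ 0xFFFFFFFF


def copy_crc32c(src: bytes) -> tuple[bytes, int]:
    """Model of the accel copy_crc32c op: copy the source, return (copy, crc)."""
    return bytes(src), crc32c(src)
```

The standard CRC-32C check value for the ASCII string "123456789" is 0xE3069283, which this sketch reproduces.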
'[' 7 -le 1 ']' 00:07:18.927 04:58:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.927 04:58:18 accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.927 ************************************ 00:07:18.927 START TEST accel_dualcast 00:07:18.927 ************************************ 00:07:18.927 04:58:18 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:18.927 04:58:18 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:18.927 04:58:18 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:18.927 04:58:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:18.927 04:58:18 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:18.927 04:58:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:18.927 04:58:18 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:18.927 04:58:18 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:18.927 04:58:18 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:18.927 04:58:18 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:18.927 04:58:18 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.927 04:58:18 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.927 04:58:18 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:18.927 04:58:18 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:18.927 04:58:18 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:18.927 [2024-07-23 04:58:18.919244] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:07:18.927 [2024-07-23 04:58:18.919370] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73615 ] 00:07:18.927 [2024-07-23 04:58:19.049433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.927 [2024-07-23 04:58:19.115889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.186 04:58:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:19.186 04:58:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.186 04:58:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.186 04:58:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.186 04:58:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:19.186 04:58:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.186 04:58:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.186 04:58:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.186 04:58:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:19.186 04:58:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.186 04:58:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.186 04:58:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.186 04:58:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:19.186 04:58:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.186 04:58:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.186 04:58:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.186 04:58:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:19.186 04:58:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.186 04:58:19 accel.accel_dualcast 
-- accel/accel.sh@19 -- # IFS=: 00:07:19.186 04:58:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.186 04:58:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:19.186 04:58:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.186 04:58:19 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:19.186 04:58:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.186 04:58:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.187 04:58:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.187 04:58:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.187 04:58:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.187 04:58:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.187 04:58:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:19.187 04:58:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.187 04:58:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.187 04:58:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.187 04:58:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:19.187 04:58:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.187 04:58:19 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:19.187 04:58:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.187 04:58:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.187 04:58:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:19.187 04:58:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.187 04:58:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.187 04:58:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.187 04:58:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 
00:07:19.187 04:58:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.187 04:58:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.187 04:58:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.187 04:58:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:19.187 04:58:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.187 04:58:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.187 04:58:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.187 04:58:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.187 04:58:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.187 04:58:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.187 04:58:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.187 04:58:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:19.187 04:58:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.187 04:58:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.187 04:58:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.187 04:58:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:19.187 04:58:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.187 04:58:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.187 04:58:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:19.187 04:58:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:19.187 04:58:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:19.187 04:58:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:19.187 04:58:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:20.124 04:58:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:20.124 04:58:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case 
"$var" in 00:07:20.124 04:58:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:20.124 04:58:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:20.124 04:58:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:20.124 04:58:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:20.124 04:58:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:20.124 04:58:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:20.124 04:58:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:20.124 04:58:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:20.124 04:58:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:20.124 04:58:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:20.124 04:58:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:20.124 04:58:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:20.124 04:58:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:20.124 04:58:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:20.124 04:58:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:20.124 04:58:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:20.124 04:58:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:20.124 04:58:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:20.124 04:58:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:20.124 04:58:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:20.124 04:58:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:20.124 04:58:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:20.124 04:58:20 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:20.124 04:58:20 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:20.124 04:58:20 accel.accel_dualcast -- accel/accel.sh@27 
-- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.124 00:07:20.124 real 0m1.411s 00:07:20.124 user 0m1.207s 00:07:20.124 sys 0m0.113s 00:07:20.124 04:58:20 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.124 04:58:20 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:20.124 ************************************ 00:07:20.124 END TEST accel_dualcast 00:07:20.124 ************************************ 00:07:20.383 04:58:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:20.383 04:58:20 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:20.383 04:58:20 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:20.383 04:58:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.383 04:58:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.383 ************************************ 00:07:20.383 START TEST accel_compare 00:07:20.383 ************************************ 00:07:20.383 04:58:20 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:20.383 04:58:20 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:20.383 04:58:20 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:20.383 04:58:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:20.383 04:58:20 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:20.383 04:58:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:20.383 04:58:20 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:20.383 04:58:20 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:20.383 04:58:20 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.383 04:58:20 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.383 04:58:20 
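The dualcast test that just completed (`accel_perf -w dualcast`) exercises an operation that writes one source buffer into two destination buffers in a single op. A minimal sketch of that semantics, assuming mutable destination buffers; this is a model of the operation, not SPDK's C API:

```python
def dualcast(src: bytes, dst1: bytearray, dst2: bytearray) -> None:
    """Model of the accel dualcast op: replicate src into both destinations."""
    if len(dst1) < len(src) or len(dst2) < len(src):
        raise ValueError("destination buffers too small")
    dst1[: len(src)] = src
    dst2[: len(src)] = src
```

In hardware-offloaded paths the attraction is that the engine performs both writes for one descriptor, halving the submissions compared to two plain copies.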
accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.383 04:58:20 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.384 04:58:20 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.384 04:58:20 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:20.384 04:58:20 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:20.384 [2024-07-23 04:58:20.381026] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:07:20.384 [2024-07-23 04:58:20.381112] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73646 ] 00:07:20.384 [2024-07-23 04:58:20.517962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.384 [2024-07-23 04:58:20.576072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:20.643 04:58:20 
accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:20.643 
04:58:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:20.643 04:58:20 
accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:20.643 04:58:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:21.580 04:58:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:21.580 04:58:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:21.580 04:58:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:21.580 04:58:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:21.580 04:58:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:21.580 04:58:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:21.580 04:58:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:21.580 04:58:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:21.580 04:58:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:21.580 04:58:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:21.580 04:58:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:21.580 04:58:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:21.580 04:58:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:21.580 04:58:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:21.580 04:58:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:21.580 04:58:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:21.580 04:58:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:21.580 04:58:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:21.580 04:58:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:21.580 04:58:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:21.580 04:58:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:21.580 04:58:21 accel.accel_compare -- accel/accel.sh@21 
-- # case "$var" in 00:07:21.580 04:58:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:21.580 04:58:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:21.580 04:58:21 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:21.580 04:58:21 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:21.580 04:58:21 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.580 00:07:21.580 real 0m1.409s 00:07:21.580 user 0m1.203s 00:07:21.580 sys 0m0.113s 00:07:21.580 04:58:21 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.580 04:58:21 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:21.580 ************************************ 00:07:21.580 END TEST accel_compare 00:07:21.580 ************************************ 00:07:21.840 04:58:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:21.840 04:58:21 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:21.840 04:58:21 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:21.840 04:58:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.840 04:58:21 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.840 ************************************ 00:07:21.840 START TEST accel_xor 00:07:21.840 ************************************ 00:07:21.840 04:58:21 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:21.840 04:58:21 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:21.840 04:58:21 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:21.840 04:58:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:21.840 04:58:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:21.840 04:58:21 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:21.840 04:58:21 accel.accel_xor -- accel/accel.sh@12 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:21.840 04:58:21 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:21.840 04:58:21 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.840 04:58:21 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.840 04:58:21 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.840 04:58:21 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.840 04:58:21 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.840 04:58:21 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:21.840 04:58:21 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:21.840 [2024-07-23 04:58:21.843741] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:07:21.840 [2024-07-23 04:58:21.843859] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73684 ] 00:07:21.840 [2024-07-23 04:58:21.971150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.840 [2024-07-23 04:58:22.029480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.099 04:58:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:22.099 04:58:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.099 04:58:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.099 04:58:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.099 04:58:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:22.099 04:58:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.099 04:58:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.099 04:58:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.099 04:58:22 accel.accel_xor -- 
accel/accel.sh@20 -- # val=0x1 00:07:22.099 04:58:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.099 04:58:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.099 04:58:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.099 04:58:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:22.099 04:58:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.099 04:58:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.099 04:58:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.099 04:58:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:22.099 04:58:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.099 04:58:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.099 04:58:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.099 04:58:22 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:22.099 04:58:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.099 04:58:22 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:22.099 04:58:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.099 04:58:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.099 04:58:22 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:22.099 04:58:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.099 04:58:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.099 04:58:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.099 04:58:22 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.100 04:58:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.100 04:58:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.100 04:58:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.100 04:58:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:22.100 04:58:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 
00:07:22.100 04:58:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.100 04:58:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.100 04:58:22 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:22.100 04:58:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.100 04:58:22 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:22.100 04:58:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.100 04:58:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.100 04:58:22 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:22.100 04:58:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.100 04:58:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.100 04:58:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.100 04:58:22 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:22.100 04:58:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.100 04:58:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.100 04:58:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.100 04:58:22 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:22.100 04:58:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.100 04:58:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.100 04:58:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.100 04:58:22 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.100 04:58:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.100 04:58:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.100 04:58:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.100 04:58:22 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:22.100 04:58:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.100 04:58:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.100 04:58:22 
accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.100 04:58:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:22.100 04:58:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.100 04:58:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.100 04:58:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:22.100 04:58:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:22.100 04:58:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:22.100 04:58:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:22.100 04:58:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.068 04:58:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.068 04:58:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.068 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.068 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.068 04:58:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.068 04:58:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.068 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.068 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.068 04:58:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.068 04:58:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.068 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.068 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.068 04:58:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.068 04:58:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.068 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.068 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.068 04:58:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.068 04:58:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 
00:07:23.068 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.068 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.068 04:58:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.068 04:58:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.068 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.068 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.068 04:58:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:23.068 04:58:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:23.068 04:58:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.068 00:07:23.068 real 0m1.391s 00:07:23.068 user 0m1.191s 00:07:23.068 sys 0m0.108s 00:07:23.068 04:58:23 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.068 04:58:23 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:23.068 ************************************ 00:07:23.068 END TEST accel_xor 00:07:23.068 ************************************ 00:07:23.068 04:58:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:23.068 04:58:23 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:23.068 04:58:23 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:23.068 04:58:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.068 04:58:23 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.068 ************************************ 00:07:23.068 START TEST accel_xor 00:07:23.068 ************************************ 00:07:23.068 04:58:23 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:23.068 04:58:23 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:23.068 04:58:23 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:23.068 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 
00:07:23.068 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.068 04:58:23 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:23.068 04:58:23 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:23.068 04:58:23 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:23.068 04:58:23 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.068 04:58:23 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.068 04:58:23 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.068 04:58:23 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.068 04:58:23 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.068 04:58:23 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:23.068 04:58:23 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:23.327 [2024-07-23 04:58:23.283230] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:07:23.327 [2024-07-23 04:58:23.283955] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73713 ] 00:07:23.327 [2024-07-23 04:58:23.412403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.327 [2024-07-23 04:58:23.471351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- 
# read -r var val 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 
00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:23.327 04:58:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.705 04:58:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:24.705 04:58:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.705 04:58:24 accel.accel_xor -- 
accel/accel.sh@19 -- # IFS=: 00:07:24.705 04:58:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.705 04:58:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:24.705 04:58:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.705 04:58:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.705 04:58:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.705 04:58:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:24.705 04:58:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.705 04:58:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.705 04:58:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.705 04:58:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:24.705 04:58:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.705 04:58:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.705 04:58:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.705 04:58:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:24.705 04:58:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.705 04:58:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.705 04:58:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.705 04:58:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:24.705 04:58:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:24.705 04:58:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:24.705 04:58:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:24.705 04:58:24 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:24.705 04:58:24 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:24.705 04:58:24 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.705 00:07:24.705 real 0m1.404s 00:07:24.705 user 0m1.198s 00:07:24.705 sys 0m0.117s 00:07:24.705 04:58:24 accel.accel_xor -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.705 04:58:24 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:24.705 ************************************ 00:07:24.705 END TEST accel_xor 00:07:24.705 ************************************ 00:07:24.705 04:58:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:24.705 04:58:24 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:24.705 04:58:24 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:24.705 04:58:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.705 04:58:24 accel -- common/autotest_common.sh@10 -- # set +x 00:07:24.705 ************************************ 00:07:24.705 START TEST accel_dif_verify 00:07:24.705 ************************************ 00:07:24.705 04:58:24 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:24.705 04:58:24 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:24.705 04:58:24 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:24.705 04:58:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:24.705 04:58:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:24.705 04:58:24 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:24.705 04:58:24 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:24.705 04:58:24 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:24.705 04:58:24 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.705 04:58:24 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.705 04:58:24 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.705 04:58:24 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.705 
04:58:24 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.705 04:58:24 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:24.705 04:58:24 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:24.705 [2024-07-23 04:58:24.731204] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:07:24.705 [2024-07-23 04:58:24.731304] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73747 ] 00:07:24.705 [2024-07-23 04:58:24.866188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.705 [2024-07-23 04:58:24.917668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.964 04:58:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:24.964 04:58:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:24.964 04:58:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:24.964 04:58:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:24.964 04:58:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:24.964 04:58:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:24.964 04:58:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:24.964 04:58:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:24.964 04:58:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:24.964 04:58:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:24.964 04:58:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:24.964 04:58:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@21 
-- # case "$var" in 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:24.965 04:58:24 accel.accel_dif_verify -- 
accel/accel.sh@20 -- # val='8 bytes' 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:24.965 04:58:24 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # read -r var val 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:24.965 04:58:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.900 04:58:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:25.900 04:58:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.900 04:58:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.900 04:58:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.900 04:58:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:25.900 04:58:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.900 04:58:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.900 04:58:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r 
var val 00:07:25.900 04:58:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:25.900 04:58:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.900 04:58:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.900 04:58:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.900 04:58:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:25.900 04:58:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.900 04:58:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.900 04:58:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.900 04:58:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:25.900 04:58:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.900 04:58:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.900 04:58:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.900 04:58:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:25.901 04:58:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:25.901 04:58:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:25.901 04:58:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:25.901 04:58:26 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:25.901 04:58:26 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:25.901 04:58:26 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.901 00:07:25.901 real 0m1.399s 00:07:25.901 user 0m1.204s 00:07:25.901 sys 0m0.106s 00:07:25.901 04:58:26 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.901 ************************************ 00:07:25.901 END TEST accel_dif_verify 00:07:25.901 ************************************ 00:07:25.901 04:58:26 accel.accel_dif_verify -- 
common/autotest_common.sh@10 -- # set +x 00:07:26.160 04:58:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:26.160 04:58:26 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:26.160 04:58:26 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:26.160 04:58:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.160 04:58:26 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.160 ************************************ 00:07:26.160 START TEST accel_dif_generate 00:07:26.160 ************************************ 00:07:26.160 04:58:26 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:26.160 04:58:26 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:26.160 04:58:26 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:26.160 04:58:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:26.160 04:58:26 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:26.160 04:58:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:26.160 04:58:26 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:26.160 04:58:26 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:26.160 04:58:26 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.160 04:58:26 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.160 04:58:26 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.160 04:58:26 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.160 04:58:26 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.160 04:58:26 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:26.160 04:58:26 
accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:26.160 [2024-07-23 04:58:26.175590] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:07:26.160 [2024-07-23 04:58:26.175675] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73782 ] 00:07:26.160 [2024-07-23 04:58:26.303654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.160 [2024-07-23 04:58:26.368086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:26.420 04:58:26 accel.accel_dif_generate 
-- accel/accel.sh@19 -- # read -r var val 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:26.420 04:58:26 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:26.420 
04:58:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:26.420 04:58:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:27.357 04:58:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:27.357 04:58:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:27.357 04:58:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:27.357 04:58:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:27.357 04:58:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:27.357 04:58:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:27.357 04:58:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:27.357 04:58:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var 
val 00:07:27.357 04:58:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:27.357 04:58:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:27.357 04:58:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:27.357 04:58:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:27.357 04:58:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:27.357 04:58:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:27.357 04:58:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:27.357 04:58:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:27.357 04:58:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:27.357 04:58:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:27.357 04:58:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:27.357 04:58:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:27.357 04:58:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:27.357 04:58:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:27.357 04:58:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:27.357 04:58:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:27.357 04:58:27 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:27.357 04:58:27 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:27.357 04:58:27 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.357 00:07:27.357 real 0m1.404s 00:07:27.357 user 0m1.208s 00:07:27.357 sys 0m0.106s 00:07:27.357 04:58:27 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.357 ************************************ 00:07:27.357 END TEST accel_dif_generate 00:07:27.357 ************************************ 00:07:27.358 04:58:27 
accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:27.617 04:58:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:27.617 04:58:27 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:27.617 04:58:27 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:27.617 04:58:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.617 04:58:27 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.617 ************************************ 00:07:27.617 START TEST accel_dif_generate_copy 00:07:27.617 ************************************ 00:07:27.617 04:58:27 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:27.617 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:27.617 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:27.617 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:27.617 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:27.617 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:27.617 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:27.617 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:27.617 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.617 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.617 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.617 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.617 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 
00:07:27.617 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:27.617 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:27.617 [2024-07-23 04:58:27.627166] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:07:27.617 [2024-07-23 04:58:27.627699] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73811 ] 00:07:27.617 [2024-07-23 04:58:27.761691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.617 [2024-07-23 04:58:27.830188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.876 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:27.876 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:27.876 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:27.876 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:27.876 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:27.876 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:27.876 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:27.876 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:27.876 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:27.876 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:27.876 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:27.876 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:27.876 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:27.876 04:58:27 
accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:27.876 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:27.876 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:27.876 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:27.876 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:27.876 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:27.876 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:27.876 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 
00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:27.877 04:58:27 
accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:27.877 04:58:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.816 04:58:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:28.816 04:58:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.816 04:58:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.816 04:58:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.816 04:58:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:28.816 04:58:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.816 04:58:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.816 04:58:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.816 04:58:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:28.816 04:58:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 
-- # case "$var" in 00:07:28.816 04:58:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.816 04:58:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.816 04:58:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:28.816 04:58:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.816 04:58:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.816 04:58:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.816 04:58:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:28.816 04:58:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.816 04:58:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.816 04:58:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.816 04:58:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:28.816 04:58:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:28.816 04:58:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:28.816 04:58:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:28.816 04:58:29 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:28.816 04:58:29 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:28.816 04:58:29 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.816 00:07:28.816 real 0m1.413s 00:07:28.816 user 0m1.200s 00:07:28.816 sys 0m0.118s 00:07:28.816 04:58:29 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.816 04:58:29 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:28.816 ************************************ 00:07:28.816 END TEST accel_dif_generate_copy 00:07:28.816 
************************************ 00:07:29.076 04:58:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:29.076 04:58:29 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:29.076 04:58:29 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:29.076 04:58:29 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:29.076 04:58:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.076 04:58:29 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.076 ************************************ 00:07:29.076 START TEST accel_comp 00:07:29.076 ************************************ 00:07:29.076 04:58:29 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:29.076 04:58:29 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:29.076 04:58:29 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:29.076 04:58:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.076 04:58:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.076 04:58:29 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:29.076 04:58:29 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:29.076 04:58:29 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:29.076 04:58:29 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.076 04:58:29 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.076 04:58:29 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.076 04:58:29 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.076 04:58:29 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 
00:07:29.076 04:58:29 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:29.076 04:58:29 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:29.076 [2024-07-23 04:58:29.087016] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:07:29.076 [2024-07-23 04:58:29.087103] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73851 ] 00:07:29.076 [2024-07-23 04:58:29.222607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.076 [2024-07-23 04:58:29.283089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var 
val 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@19 -- # 
read -r var val 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:29.335 04:58:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.336 04:58:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.336 04:58:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.336 04:58:29 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:29.336 04:58:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.336 04:58:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.336 04:58:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.336 04:58:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:29.336 04:58:29 accel.accel_comp 
-- accel/accel.sh@21 -- # case "$var" in 00:07:29.336 04:58:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.336 04:58:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.336 04:58:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:29.336 04:58:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.336 04:58:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.336 04:58:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:30.272 04:58:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:30.272 04:58:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.272 04:58:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:30.272 04:58:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:30.272 04:58:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:30.272 04:58:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.272 04:58:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:30.272 04:58:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:30.272 04:58:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:30.272 04:58:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.272 04:58:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:30.272 04:58:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:30.272 04:58:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:30.272 04:58:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.272 04:58:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:30.272 04:58:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:30.272 04:58:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:30.272 04:58:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.272 04:58:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:30.272 04:58:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:07:30.272 04:58:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:30.272 04:58:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.272 04:58:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:30.272 04:58:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:30.272 04:58:30 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:30.272 04:58:30 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:30.272 ************************************ 00:07:30.272 END TEST accel_comp 00:07:30.272 ************************************ 00:07:30.272 04:58:30 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.272 00:07:30.272 real 0m1.418s 00:07:30.272 user 0m1.209s 00:07:30.272 sys 0m0.119s 00:07:30.272 04:58:30 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.272 04:58:30 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:30.531 04:58:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:30.531 04:58:30 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:30.531 04:58:30 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:30.531 04:58:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.531 04:58:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:30.531 ************************************ 00:07:30.531 START TEST accel_decomp 00:07:30.531 ************************************ 00:07:30.531 04:58:30 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:30.531 04:58:30 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:30.531 04:58:30 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:30.531 04:58:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.531 
04:58:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.532 04:58:30 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:30.532 04:58:30 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:30.532 04:58:30 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:30.532 04:58:30 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:30.532 04:58:30 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:30.532 04:58:30 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.532 04:58:30 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.532 04:58:30 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:30.532 04:58:30 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:30.532 04:58:30 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:30.532 [2024-07-23 04:58:30.555529] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:07:30.532 [2024-07-23 04:58:30.555634] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73880 ] 00:07:30.532 [2024-07-23 04:58:30.683231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.532 [2024-07-23 04:58:30.738541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 
00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:30.791 04:58:30 accel.accel_decomp -- 
accel/accel.sh@21 -- # case "$var" in 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 
00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.791 04:58:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.728 04:58:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:31.728 04:58:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.728 04:58:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.728 04:58:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.728 04:58:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:31.728 04:58:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.728 04:58:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.728 04:58:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.728 04:58:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:31.728 04:58:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.728 04:58:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.728 04:58:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.728 04:58:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:31.728 04:58:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.728 04:58:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.728 04:58:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.728 04:58:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:31.728 04:58:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.728 04:58:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.728 04:58:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.728 04:58:31 accel.accel_decomp -- 
accel/accel.sh@20 -- # val= 00:07:31.728 04:58:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:31.728 04:58:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:31.728 04:58:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:31.728 04:58:31 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:31.728 04:58:31 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:31.728 04:58:31 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.987 00:07:31.987 real 0m1.415s 00:07:31.987 user 0m1.215s 00:07:31.987 sys 0m0.111s 00:07:31.987 04:58:31 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.987 04:58:31 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:31.987 ************************************ 00:07:31.987 END TEST accel_decomp 00:07:31.987 ************************************ 00:07:31.987 04:58:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:31.987 04:58:31 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:31.987 04:58:31 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:31.987 04:58:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.987 04:58:31 accel -- common/autotest_common.sh@10 -- # set +x 00:07:31.987 ************************************ 00:07:31.987 START TEST accel_decomp_full 00:07:31.987 ************************************ 00:07:31.987 04:58:31 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:31.987 04:58:31 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:31.987 04:58:31 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:31.987 04:58:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 
00:07:31.987 04:58:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:31.987 04:58:31 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:31.987 04:58:32 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:31.987 04:58:32 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:31.987 04:58:32 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.987 04:58:32 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.988 04:58:32 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.988 04:58:32 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.988 04:58:32 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:31.988 04:58:32 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:31.988 04:58:32 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:31.988 [2024-07-23 04:58:32.021772] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:07:31.988 [2024-07-23 04:58:32.021861] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73920 ] 00:07:31.988 [2024-07-23 04:58:32.152859] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.247 [2024-07-23 04:58:32.223518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.247 04:58:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:32.247 04:58:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.247 04:58:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.247 04:58:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.247 04:58:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:32.247 04:58:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.247 04:58:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.247 04:58:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.247 04:58:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:32.247 04:58:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.247 04:58:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.247 04:58:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.247 04:58:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:32.247 04:58:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.247 04:58:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.247 04:58:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.247 04:58:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:32.247 04:58:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:07:32.247 04:58:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.247 04:58:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.247 04:58:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:32.247 04:58:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.247 04:58:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.247 04:58:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.247 04:58:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:32.247 04:58:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.247 04:58:32 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:32.247 04:58:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.247 04:58:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.247 04:58:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:32.247 04:58:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.247 04:58:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.247 04:58:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.247 04:58:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:32.247 04:58:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.247 04:58:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.247 04:58:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.247 04:58:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:32.247 04:58:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.247 04:58:32 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:32.247 04:58:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.247 04:58:32 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.247 04:58:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:32.248 04:58:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.248 04:58:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.248 04:58:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.248 04:58:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:32.248 04:58:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.248 04:58:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.248 04:58:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.248 04:58:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:32.248 04:58:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.248 04:58:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.248 04:58:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.248 04:58:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:32.248 04:58:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.248 04:58:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.248 04:58:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.248 04:58:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:32.248 04:58:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.248 04:58:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.248 04:58:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.248 04:58:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:32.248 04:58:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.248 04:58:32 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:07:32.248 04:58:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.248 04:58:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:32.248 04:58:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.248 04:58:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.248 04:58:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.248 04:58:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:32.248 04:58:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.248 04:58:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.248 04:58:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.629 04:58:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:33.629 04:58:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:33.629 04:58:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:33.629 04:58:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.629 04:58:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:33.629 04:58:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:33.629 04:58:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:33.629 04:58:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.629 04:58:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:33.629 04:58:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:33.629 04:58:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:33.629 04:58:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.629 04:58:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:33.629 04:58:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:33.629 04:58:33 accel.accel_decomp_full -- accel/accel.sh@19 
-- # IFS=: 00:07:33.629 04:58:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.629 04:58:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:33.629 04:58:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:33.629 04:58:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:33.629 04:58:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.629 04:58:33 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:33.629 04:58:33 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:33.629 04:58:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:33.629 04:58:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:33.629 04:58:33 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:33.629 04:58:33 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:33.629 04:58:33 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.629 00:07:33.629 real 0m1.430s 00:07:33.629 user 0m1.226s 00:07:33.629 sys 0m0.114s 00:07:33.629 04:58:33 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.629 04:58:33 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:33.629 ************************************ 00:07:33.629 END TEST accel_decomp_full 00:07:33.629 ************************************ 00:07:33.629 04:58:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:33.629 04:58:33 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:33.629 04:58:33 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:33.629 04:58:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.629 04:58:33 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.629 
************************************ 00:07:33.629 START TEST accel_decomp_mcore 00:07:33.629 ************************************ 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:33.629 [2024-07-23 04:58:33.501674] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:07:33.629 [2024-07-23 04:58:33.501761] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73949 ] 00:07:33.629 [2024-07-23 04:58:33.638583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:33.629 [2024-07-23 04:58:33.698304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.629 [2024-07-23 04:58:33.698470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.629 [2024-07-23 04:58:33.698572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.629 [2024-07-23 04:58:33.698573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.629 04:58:33 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:33.629 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.630 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:33.630 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.630 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.630 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:33.630 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.630 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.630 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.630 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:33.630 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.630 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.630 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.630 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:33.630 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.630 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.630 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.630 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:33.630 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.630 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.630 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.630 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:33.630 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.630 04:58:33 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.630 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.630 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:33.630 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.630 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.630 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.630 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:33.630 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.630 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.630 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.630 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:33.630 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.630 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.630 04:58:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.008 04:58:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.008 04:58:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.008 04:58:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.008 04:58:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.008 04:58:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.008 04:58:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.008 04:58:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.008 04:58:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.008 04:58:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.008 04:58:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.008 
04:58:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.008 04:58:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.008 04:58:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.008 04:58:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.008 04:58:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.008 04:58:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.008 04:58:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.008 04:58:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.008 04:58:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.008 04:58:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.008 04:58:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.008 04:58:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.008 04:58:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.008 04:58:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.008 04:58:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.008 04:58:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.008 04:58:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.009 04:58:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.009 04:58:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.009 04:58:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.009 04:58:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.009 04:58:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.009 04:58:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:35.009 04:58:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:07:35.009 04:58:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.009 04:58:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.009 04:58:34 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:35.009 04:58:34 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:35.009 04:58:34 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.009 00:07:35.009 real 0m1.416s 00:07:35.009 user 0m4.591s 00:07:35.009 sys 0m0.121s 00:07:35.009 04:58:34 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.009 04:58:34 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:35.009 ************************************ 00:07:35.009 END TEST accel_decomp_mcore 00:07:35.009 ************************************ 00:07:35.009 04:58:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:35.009 04:58:34 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:35.009 04:58:34 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:35.009 04:58:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.009 04:58:34 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.009 ************************************ 00:07:35.009 START TEST accel_decomp_full_mcore 00:07:35.009 ************************************ 00:07:35.009 04:58:34 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:35.009 04:58:34 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:35.009 04:58:34 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:35.009 04:58:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.009 
04:58:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.009 04:58:34 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:35.009 04:58:34 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:35.009 04:58:34 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:35.009 04:58:34 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.009 04:58:34 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.009 04:58:34 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.009 04:58:34 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.009 04:58:34 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.009 04:58:34 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:35.009 04:58:34 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:35.009 [2024-07-23 04:58:34.967471] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:07:35.009 [2024-07-23 04:58:34.967558] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73981 ] 00:07:35.009 [2024-07-23 04:58:35.101982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:35.009 [2024-07-23 04:58:35.157640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.009 [2024-07-23 04:58:35.157776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.009 [2024-07-23 04:58:35.157934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.009 [2024-07-23 04:58:35.157925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 
00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.009 
04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.009 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.318 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.318 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:35.318 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.318 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.318 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.318 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:35.318 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.318 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.318 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.318 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:35.318 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.318 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 
00:07:35.318 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.318 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:35.318 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.318 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.318 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.318 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:35.318 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.318 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.318 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.318 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:35.318 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.318 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.318 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.318 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:35.318 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.318 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.318 04:58:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.250 04:58:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.250 04:58:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.250 04:58:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.250 04:58:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.250 04:58:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.250 04:58:36 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.250 04:58:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.250 04:58:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.250 04:58:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.250 04:58:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.250 04:58:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.250 04:58:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.250 04:58:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.250 04:58:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.250 04:58:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.250 04:58:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.250 04:58:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.250 04:58:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.251 04:58:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.251 04:58:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.251 04:58:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.251 04:58:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.251 04:58:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.251 04:58:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.251 04:58:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.251 04:58:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.251 04:58:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.251 04:58:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 
-- # read -r var val 00:07:36.251 04:58:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.251 04:58:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.251 04:58:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.251 04:58:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.251 04:58:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.251 04:58:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.251 04:58:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.251 04:58:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.251 04:58:36 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:36.251 04:58:36 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:36.251 04:58:36 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.251 00:07:36.251 real 0m1.427s 00:07:36.251 user 0m4.618s 00:07:36.251 sys 0m0.138s 00:07:36.251 04:58:36 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.251 04:58:36 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:36.251 ************************************ 00:07:36.251 END TEST accel_decomp_full_mcore 00:07:36.251 ************************************ 00:07:36.251 04:58:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:36.251 04:58:36 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:36.251 04:58:36 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:36.251 04:58:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.251 04:58:36 accel -- common/autotest_common.sh@10 -- # set +x 00:07:36.251 
************************************ 00:07:36.251 START TEST accel_decomp_mthread 00:07:36.251 ************************************ 00:07:36.251 04:58:36 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:36.251 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:36.251 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:36.251 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.251 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.251 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:36.251 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:36.251 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:36.251 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:36.251 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:36.251 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.251 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.251 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:36.251 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:36.251 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:36.251 [2024-07-23 04:58:36.438205] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:07:36.251 [2024-07-23 04:58:36.438288] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74024 ] 00:07:36.510 [2024-07-23 04:58:36.560021] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.510 [2024-07-23 04:58:36.617453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.510 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:36.510 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.510 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.510 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.510 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:36.510 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.510 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.510 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.510 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:36.510 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.510 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.510 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.510 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:36.510 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.510 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:36.511 04:58:36 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # 
accel_module=software 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.511 04:58:36 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.511 04:58:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.888 04:58:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:37.888 04:58:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.888 04:58:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.888 04:58:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.888 04:58:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:37.888 04:58:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.888 04:58:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.888 04:58:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.888 04:58:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:37.888 04:58:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.888 04:58:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.888 04:58:37 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:07:37.888 04:58:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:37.888 04:58:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.888 04:58:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.888 04:58:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.888 04:58:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:37.888 04:58:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.888 04:58:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.888 04:58:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.888 04:58:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:37.888 04:58:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.888 04:58:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.888 04:58:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.888 04:58:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:37.888 04:58:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.888 04:58:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.888 04:58:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.888 04:58:37 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:37.888 04:58:37 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:37.888 04:58:37 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.888 00:07:37.888 real 0m1.390s 00:07:37.888 user 0m1.197s 00:07:37.888 sys 0m0.103s 00:07:37.888 04:58:37 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.888 04:58:37 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 
00:07:37.888 ************************************ 00:07:37.888 END TEST accel_decomp_mthread 00:07:37.888 ************************************ 00:07:37.888 04:58:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:37.888 04:58:37 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:37.888 04:58:37 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:37.888 04:58:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.888 04:58:37 accel -- common/autotest_common.sh@10 -- # set +x 00:07:37.888 ************************************ 00:07:37.888 START TEST accel_decomp_full_mthread 00:07:37.888 ************************************ 00:07:37.888 04:58:37 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:37.889 04:58:37 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:37.889 04:58:37 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:37.889 04:58:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.889 04:58:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.889 04:58:37 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:37.889 04:58:37 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:37.889 04:58:37 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:37.889 04:58:37 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:37.889 04:58:37 accel.accel_decomp_full_mthread -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:37.889 04:58:37 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.889 04:58:37 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.889 04:58:37 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:37.889 04:58:37 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:37.889 04:58:37 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:37.889 [2024-07-23 04:58:37.880051] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:07:37.889 [2024-07-23 04:58:37.880136] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74053 ] 00:07:37.889 [2024-07-23 04:58:38.016740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.889 [2024-07-23 04:58:38.075191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:38.147 04:58:38 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # 
case "$var" in 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.147 
04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.147 04:58:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
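[Editorial aside] The long `IFS=:` / `read -r var val` / `case "$var" in` trace above is accel.sh's config reader consuming `key:value` pairs (opc, module, block size, thread count, and so on). A minimal sketch of that pattern follows; the function and variable names here are illustrative, not the script's own:

```shell
# Parse "key:value" pairs the way the traced loop does: IFS=':' splits
# each line into var and val, and a case statement dispatches on var.
parse_accel_config() {
    local IFS=:
    local var val
    while read -r var val; do
        case "$var" in
            opc) accel_opc=$val ;;
            module) accel_module=$val ;;
            *) ;;  # unrecognized keys are skipped
        esac
    done
}

# Feed the same kind of config the trace shows (decompress via the
# software module) into the parser in the current shell:
parse_accel_config <<'EOF'
opc:decompress
module:software
EOF
```

Because the here-document redirects into the function in the current shell (rather than piping through a subshell), `accel_opc` and `accel_module` remain set afterwards, which is how the real harness accumulates its configuration.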
00:07:39.081 04:58:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:39.081 04:58:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.081 04:58:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.081 04:58:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.081 04:58:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:39.081 04:58:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.081 04:58:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.081 04:58:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.081 04:58:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:39.081 04:58:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.081 04:58:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.081 04:58:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.081 04:58:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:39.081 04:58:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.081 04:58:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.081 04:58:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.081 04:58:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:39.081 04:58:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.081 04:58:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.081 04:58:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.081 04:58:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:39.081 04:58:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:07:39.081 04:58:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.081 04:58:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.081 04:58:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:39.081 04:58:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.081 04:58:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.081 04:58:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.081 04:58:39 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:39.081 04:58:39 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:39.081 04:58:39 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:39.081 00:07:39.081 real 0m1.433s 00:07:39.081 user 0m1.233s 00:07:39.081 sys 0m0.110s 00:07:39.081 04:58:39 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.081 ************************************ 00:07:39.081 END TEST accel_decomp_full_mthread 00:07:39.081 ************************************ 00:07:39.081 04:58:39 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:39.340 04:58:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:39.340 04:58:39 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:39.340 04:58:39 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:39.340 04:58:39 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:39.340 04:58:39 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:39.340 04:58:39 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:39.340 04:58:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.340 04:58:39 accel -- common/autotest_common.sh@10 -- # set +x 00:07:39.340 
04:58:39 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:39.340 04:58:39 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.340 04:58:39 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.340 04:58:39 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:39.340 04:58:39 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:39.340 04:58:39 accel -- accel/accel.sh@41 -- # jq -r . 00:07:39.340 ************************************ 00:07:39.340 START TEST accel_dif_functional_tests 00:07:39.340 ************************************ 00:07:39.340 04:58:39 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:39.340 [2024-07-23 04:58:39.396848] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:07:39.340 [2024-07-23 04:58:39.396930] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74094 ] 00:07:39.340 [2024-07-23 04:58:39.525886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:39.598 [2024-07-23 04:58:39.580870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.598 [2024-07-23 04:58:39.581018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.598 [2024-07-23 04:58:39.581031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.598 00:07:39.598 00:07:39.598 CUnit - A unit testing framework for C - Version 2.1-3 00:07:39.598 http://cunit.sourceforge.net/ 00:07:39.598 00:07:39.598 00:07:39.598 Suite: accel_dif 00:07:39.598 Test: verify: DIF generated, GUARD check ...passed 00:07:39.598 Test: verify: DIF generated, APPTAG check ...passed 00:07:39.598 Test: verify: DIF generated, REFTAG check ...passed 00:07:39.598 Test: verify: DIF not generated, GUARD check ...[2024-07-23 
04:58:39.684069] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:39.598 passed 00:07:39.598 Test: verify: DIF not generated, APPTAG check ...passed 00:07:39.598 Test: verify: DIF not generated, REFTAG check ...passed 00:07:39.598 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:39.598 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-23 04:58:39.684204] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:39.598 [2024-07-23 04:58:39.684267] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:39.598 [2024-07-23 04:58:39.684383] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:39.598 passed 00:07:39.598 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:39.598 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:39.598 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:39.598 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:39.598 Test: verify copy: DIF generated, GUARD check ...[2024-07-23 04:58:39.684627] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:39.598 passed 00:07:39.598 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:39.598 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:39.598 Test: verify copy: DIF not generated, GUARD check ...passed 00:07:39.598 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-23 04:58:39.684907] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:39.598 [2024-07-23 04:58:39.684960] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:39.598 passed 00:07:39.598 Test: verify copy: DIF not generated, REFTAG check ...passed 00:07:39.598 Test: generate copy: DIF generated, GUARD check 
...passed 00:07:39.598 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:39.598 Test: generate copy: DIF generated, REFTAG check ...[2024-07-23 04:58:39.685008] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:39.598 passed 00:07:39.598 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:39.598 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:39.598 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:39.598 Test: generate copy: iovecs-len validate ...[2024-07-23 04:58:39.685379] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:39.598 passed 00:07:39.598 Test: generate copy: buffer alignment validate ...passed 00:07:39.598 00:07:39.598 Run Summary: Type Total Ran Passed Failed Inactive 00:07:39.598 suites 1 1 n/a 0 0 00:07:39.598 tests 26 26 26 0 0 00:07:39.598 asserts 115 115 115 0 n/a 00:07:39.598 00:07:39.598 Elapsed time = 0.005 seconds 00:07:39.855 00:07:39.855 real 0m0.555s 00:07:39.855 user 0m0.784s 00:07:39.855 sys 0m0.152s 00:07:39.855 04:58:39 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.855 04:58:39 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:39.855 ************************************ 00:07:39.855 END TEST accel_dif_functional_tests 00:07:39.855 ************************************ 00:07:39.855 04:58:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:39.855 00:07:39.855 real 0m32.430s 00:07:39.855 user 0m34.315s 00:07:39.855 sys 0m3.853s 00:07:39.855 04:58:39 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.855 04:58:39 accel -- common/autotest_common.sh@10 -- # set +x 00:07:39.855 ************************************ 00:07:39.855 END TEST accel 00:07:39.855 ************************************ 00:07:39.855 
04:58:39 -- common/autotest_common.sh@1142 -- # return 0 00:07:39.855 04:58:39 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:39.855 04:58:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:39.855 04:58:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.855 04:58:39 -- common/autotest_common.sh@10 -- # set +x 00:07:39.855 ************************************ 00:07:39.855 START TEST accel_rpc 00:07:39.855 ************************************ 00:07:39.855 04:58:39 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:39.855 * Looking for test storage... 00:07:39.855 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:39.855 04:58:40 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:39.855 04:58:40 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=74158 00:07:39.855 04:58:40 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 74158 00:07:39.855 04:58:40 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 74158 ']' 00:07:39.855 04:58:40 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.855 04:58:40 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:39.855 04:58:40 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:39.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.855 04:58:40 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.855 04:58:40 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:39.855 04:58:40 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.112 [2024-07-23 04:58:40.145510] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:07:40.112 [2024-07-23 04:58:40.145641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74158 ] 00:07:40.112 [2024-07-23 04:58:40.283791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.370 [2024-07-23 04:58:40.362507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.935 04:58:41 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:40.935 04:58:41 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:40.935 04:58:41 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:40.935 04:58:41 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:40.935 04:58:41 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:40.935 04:58:41 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:40.935 04:58:41 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:40.935 04:58:41 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:40.935 04:58:41 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.935 04:58:41 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.935 ************************************ 00:07:40.935 START TEST accel_assign_opcode 00:07:40.935 ************************************ 00:07:40.935 04:58:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:40.935 04:58:41 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:40.935 04:58:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.935 04:58:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:40.935 [2024-07-23 04:58:41.086977] 
accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:40.935 04:58:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.935 04:58:41 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:40.935 04:58:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.935 04:58:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:40.935 [2024-07-23 04:58:41.094966] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:40.935 04:58:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.935 04:58:41 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:40.935 04:58:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.935 04:58:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:41.193 04:58:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.193 04:58:41 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:41.193 04:58:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.193 04:58:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:41.193 04:58:41 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:41.193 04:58:41 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:41.193 04:58:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.193 software 00:07:41.193 00:07:41.193 real 0m0.275s 00:07:41.193 user 0m0.054s 00:07:41.193 sys 0m0.014s 00:07:41.193 04:58:41 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.193 04:58:41 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:41.193 ************************************ 00:07:41.193 END TEST accel_assign_opcode 00:07:41.193 ************************************ 00:07:41.193 04:58:41 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:41.193 04:58:41 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 74158 00:07:41.193 04:58:41 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 74158 ']' 00:07:41.193 04:58:41 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 74158 00:07:41.193 04:58:41 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:41.193 04:58:41 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:41.193 04:58:41 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74158 00:07:41.451 04:58:41 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:41.452 04:58:41 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:41.452 killing process with pid 74158 00:07:41.452 04:58:41 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74158' 00:07:41.452 04:58:41 accel_rpc -- common/autotest_common.sh@967 -- # kill 74158 00:07:41.452 04:58:41 accel_rpc -- common/autotest_common.sh@972 -- # wait 74158 00:07:41.710 00:07:41.710 real 0m1.792s 00:07:41.710 user 0m1.887s 00:07:41.710 sys 0m0.430s 00:07:41.710 04:58:41 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.710 04:58:41 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:41.710 ************************************ 00:07:41.710 END TEST accel_rpc 00:07:41.710 ************************************ 00:07:41.710 04:58:41 -- common/autotest_common.sh@1142 -- # return 0 00:07:41.710 04:58:41 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:41.710 04:58:41 -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:41.710 04:58:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.710 04:58:41 -- common/autotest_common.sh@10 -- # set +x 00:07:41.710 ************************************ 00:07:41.710 START TEST app_cmdline 00:07:41.710 ************************************ 00:07:41.710 04:58:41 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:41.710 * Looking for test storage... 00:07:41.710 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:41.710 04:58:41 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:41.710 04:58:41 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=74247 00:07:41.710 04:58:41 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 74247 00:07:41.710 04:58:41 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:41.710 04:58:41 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 74247 ']' 00:07:41.710 04:58:41 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.710 04:58:41 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:41.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.710 04:58:41 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.710 04:58:41 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:41.710 04:58:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:41.968 [2024-07-23 04:58:41.980834] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:07:41.968 [2024-07-23 04:58:41.980927] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74247 ] 00:07:41.968 [2024-07-23 04:58:42.117481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.968 [2024-07-23 04:58:42.179144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.901 04:58:42 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:42.901 04:58:42 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:42.901 04:58:42 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:43.159 { 00:07:43.159 "version": "SPDK v24.09-pre git sha1 f7b31b2b9", 00:07:43.159 "fields": { 00:07:43.159 "major": 24, 00:07:43.159 "minor": 9, 00:07:43.159 "patch": 0, 00:07:43.159 "suffix": "-pre", 00:07:43.159 "commit": "f7b31b2b9" 00:07:43.159 } 00:07:43.159 } 00:07:43.159 04:58:43 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:43.159 04:58:43 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:43.159 04:58:43 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:43.159 04:58:43 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:43.159 04:58:43 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:43.159 04:58:43 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:43.159 04:58:43 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:43.159 04:58:43 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.159 04:58:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:43.159 04:58:43 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.159 04:58:43 app_cmdline -- 
app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:43.159 04:58:43 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:43.159 04:58:43 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:43.159 04:58:43 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:43.159 04:58:43 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:43.159 04:58:43 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:43.159 04:58:43 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.159 04:58:43 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:43.160 04:58:43 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.160 04:58:43 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:43.160 04:58:43 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.160 04:58:43 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:43.160 04:58:43 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:43.160 04:58:43 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:43.418 request: 00:07:43.418 { 00:07:43.418 "method": "env_dpdk_get_mem_stats", 00:07:43.418 "req_id": 1 00:07:43.418 } 00:07:43.418 Got JSON-RPC error response 00:07:43.418 response: 00:07:43.418 { 00:07:43.418 "code": -32601, 00:07:43.418 "message": "Method not found" 00:07:43.418 } 00:07:43.418 04:58:43 app_cmdline -- common/autotest_common.sh@651 -- # es=1 
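[Editorial aside] The `request:`/`response:` pair above is a plain JSON-RPC 2.0 exchange: spdk_tgt was started with `--rpcs-allowed spdk_get_version,rpc_get_methods`, so `env_dpdk_get_mem_stats` is rejected with error code -32601. A hedged sketch of building and classifying such messages — it only constructs and inspects the JSON text, does not talk to a live spdk_tgt (a real call goes over the Unix socket via `scripts/rpc.py` as in the trace), and the helper names are illustrative:

```shell
# Build the JSON-RPC 2.0 request that rpc.py sends for a given method.
build_rpc_request() {
    printf '{"jsonrpc": "2.0", "method": "%s", "id": 1}' "$1"
}

# True (exit 0) when a reply carries the -32601 "Method not found" error
# that spdk_tgt returns for methods outside its --rpcs-allowed list.
is_method_not_found() {
    case "$1" in
        *'"code": -32601'*) return 0 ;;
        *) return 1 ;;
    esac
}

# The failing exchange from the log, reconstructed verbatim:
reply='{"jsonrpc": "2.0", "id": 1, "error": {"code": -32601, "message": "Method not found"}}'
```

The glob match in `is_method_not_found` is a deliberately crude stand-in for real JSON parsing (the harness itself uses `jq`, as visible elsewhere in the trace).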
00:07:43.418 04:58:43 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:43.418 04:58:43 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:43.418 04:58:43 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:43.418 04:58:43 app_cmdline -- app/cmdline.sh@1 -- # killprocess 74247 00:07:43.418 04:58:43 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 74247 ']' 00:07:43.418 04:58:43 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 74247 00:07:43.418 04:58:43 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:43.418 04:58:43 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:43.418 04:58:43 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74247 00:07:43.418 04:58:43 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:43.418 04:58:43 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:43.418 killing process with pid 74247 00:07:43.418 04:58:43 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74247' 00:07:43.418 04:58:43 app_cmdline -- common/autotest_common.sh@967 -- # kill 74247 00:07:43.418 04:58:43 app_cmdline -- common/autotest_common.sh@972 -- # wait 74247 00:07:43.694 00:07:43.694 real 0m2.067s 00:07:43.694 user 0m2.632s 00:07:43.694 sys 0m0.446s 00:07:43.694 04:58:43 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.694 04:58:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:43.694 ************************************ 00:07:43.694 END TEST app_cmdline 00:07:43.694 ************************************ 00:07:43.961 04:58:43 -- common/autotest_common.sh@1142 -- # return 0 00:07:43.961 04:58:43 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:43.961 04:58:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:43.961 04:58:43 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.961 04:58:43 -- common/autotest_common.sh@10 -- # set +x 00:07:43.961 ************************************ 00:07:43.961 START TEST version 00:07:43.961 ************************************ 00:07:43.961 04:58:43 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:43.961 * Looking for test storage... 00:07:43.961 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:43.961 04:58:44 version -- app/version.sh@17 -- # get_header_version major 00:07:43.961 04:58:44 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:43.961 04:58:44 version -- app/version.sh@14 -- # tr -d '"' 00:07:43.961 04:58:44 version -- app/version.sh@14 -- # cut -f2 00:07:43.961 04:58:44 version -- app/version.sh@17 -- # major=24 00:07:43.961 04:58:44 version -- app/version.sh@18 -- # get_header_version minor 00:07:43.961 04:58:44 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:43.961 04:58:44 version -- app/version.sh@14 -- # cut -f2 00:07:43.961 04:58:44 version -- app/version.sh@14 -- # tr -d '"' 00:07:43.961 04:58:44 version -- app/version.sh@18 -- # minor=9 00:07:43.961 04:58:44 version -- app/version.sh@19 -- # get_header_version patch 00:07:43.961 04:58:44 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:43.961 04:58:44 version -- app/version.sh@14 -- # cut -f2 00:07:43.961 04:58:44 version -- app/version.sh@14 -- # tr -d '"' 00:07:43.961 04:58:44 version -- app/version.sh@19 -- # patch=0 00:07:43.961 04:58:44 version -- app/version.sh@20 -- # get_header_version suffix 00:07:43.961 04:58:44 version -- app/version.sh@14 -- # cut -f2 00:07:43.961 04:58:44 version -- app/version.sh@13 -- # grep -E 
'^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:43.961 04:58:44 version -- app/version.sh@14 -- # tr -d '"' 00:07:43.961 04:58:44 version -- app/version.sh@20 -- # suffix=-pre 00:07:43.962 04:58:44 version -- app/version.sh@22 -- # version=24.9 00:07:43.962 04:58:44 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:43.962 04:58:44 version -- app/version.sh@28 -- # version=24.9rc0 00:07:43.962 04:58:44 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:43.962 04:58:44 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:43.962 04:58:44 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:43.962 04:58:44 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:43.962 00:07:43.962 real 0m0.145s 00:07:43.962 user 0m0.080s 00:07:43.962 sys 0m0.094s 00:07:43.962 04:58:44 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.962 04:58:44 version -- common/autotest_common.sh@10 -- # set +x 00:07:43.962 ************************************ 00:07:43.962 END TEST version 00:07:43.962 ************************************ 00:07:43.962 04:58:44 -- common/autotest_common.sh@1142 -- # return 0 00:07:43.962 04:58:44 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:43.962 04:58:44 -- spdk/autotest.sh@198 -- # uname -s 00:07:43.962 04:58:44 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:43.962 04:58:44 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:43.962 04:58:44 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:43.962 04:58:44 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:43.962 04:58:44 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:43.962 04:58:44 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:43.962 04:58:44 -- 
common/autotest_common.sh@728 -- # xtrace_disable 00:07:43.962 04:58:44 -- common/autotest_common.sh@10 -- # set +x 00:07:43.962 04:58:44 -- spdk/autotest.sh@262 -- # '[' 1 -eq 1 ']' 00:07:43.962 04:58:44 -- spdk/autotest.sh@263 -- # run_test iscsi_tgt /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/iscsi_tgt.sh 00:07:43.962 04:58:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:43.962 04:58:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.962 04:58:44 -- common/autotest_common.sh@10 -- # set +x 00:07:43.962 ************************************ 00:07:43.962 START TEST iscsi_tgt 00:07:43.962 ************************************ 00:07:43.962 04:58:44 iscsi_tgt -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/iscsi_tgt.sh 00:07:44.220 * Looking for test storage... 00:07:44.220 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@10 -- # uname -s 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@18 -- # iscsicleanup 00:07:44.220 04:58:44 iscsi_tgt -- common/autotest_common.sh@980 -- # echo 'Cleaning up 
iSCSI connection' 00:07:44.220 Cleaning up iSCSI connection 00:07:44.220 04:58:44 iscsi_tgt -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:07:44.220 iscsiadm: No matching sessions found 00:07:44.220 04:58:44 iscsi_tgt -- common/autotest_common.sh@981 -- # true 00:07:44.220 04:58:44 iscsi_tgt -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:07:44.220 iscsiadm: No records found 00:07:44.220 04:58:44 iscsi_tgt -- common/autotest_common.sh@982 -- # true 00:07:44.220 04:58:44 iscsi_tgt -- common/autotest_common.sh@983 -- # rm -rf 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@21 -- # create_veth_interfaces 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@32 -- # ip link set init_br nomaster 00:07:44.220 Cannot find device "init_br" 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@32 -- # true 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@33 -- # ip link set tgt_br nomaster 00:07:44.220 Cannot find device "tgt_br" 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@33 -- # true 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@34 -- # ip link set tgt_br2 nomaster 00:07:44.220 Cannot find device "tgt_br2" 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@34 -- # true 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@35 -- # ip link set init_br down 00:07:44.220 Cannot find device "init_br" 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@35 -- # true 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@36 -- # ip link set tgt_br down 00:07:44.220 Cannot find device "tgt_br" 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@36 -- # true 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@37 -- # ip link set tgt_br2 down 00:07:44.220 Cannot find device "tgt_br2" 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@37 -- # true 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@38 -- # ip link delete iscsi_br type bridge 00:07:44.220 Cannot find device "iscsi_br" 
00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@38 -- # true 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@39 -- # ip link delete spdk_init_int 00:07:44.220 Cannot find device "spdk_init_int" 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@39 -- # true 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@40 -- # ip netns exec spdk_iscsi_ns ip link delete spdk_tgt_int 00:07:44.220 Cannot open network namespace "spdk_iscsi_ns": No such file or directory 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@40 -- # true 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@41 -- # ip netns exec spdk_iscsi_ns ip link delete spdk_tgt_int2 00:07:44.220 Cannot open network namespace "spdk_iscsi_ns": No such file or directory 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@41 -- # true 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@42 -- # ip netns del spdk_iscsi_ns 00:07:44.220 Cannot remove namespace file "/var/run/netns/spdk_iscsi_ns": No such file or directory 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@42 -- # true 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@44 -- # trap 'cleanup_veth_interfaces; exit 1' SIGINT SIGTERM EXIT 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@47 -- # ip netns add spdk_iscsi_ns 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@50 -- # ip link add spdk_init_int type veth peer name init_br 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@51 -- # ip link add spdk_tgt_int type veth peer name tgt_br 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@52 -- # ip link add spdk_tgt_int2 type veth peer name tgt_br2 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@55 -- # ip link set spdk_tgt_int netns spdk_iscsi_ns 00:07:44.220 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@56 -- # ip link set spdk_tgt_int2 netns spdk_iscsi_ns 00:07:44.478 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@59 -- # ip addr add 10.0.0.2/24 dev spdk_init_int 00:07:44.478 04:58:44 iscsi_tgt -- 
iscsi_tgt/common.sh@60 -- # ip netns exec spdk_iscsi_ns ip addr add 10.0.0.1/24 dev spdk_tgt_int 00:07:44.478 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@61 -- # ip netns exec spdk_iscsi_ns ip addr add 10.0.0.3/24 dev spdk_tgt_int2 00:07:44.478 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@64 -- # ip link set spdk_init_int up 00:07:44.478 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@65 -- # ip link set init_br up 00:07:44.478 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@66 -- # ip link set tgt_br up 00:07:44.478 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@67 -- # ip link set tgt_br2 up 00:07:44.478 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@68 -- # ip netns exec spdk_iscsi_ns ip link set spdk_tgt_int up 00:07:44.478 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@69 -- # ip netns exec spdk_iscsi_ns ip link set spdk_tgt_int2 up 00:07:44.478 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@70 -- # ip netns exec spdk_iscsi_ns ip link set lo up 00:07:44.478 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@73 -- # ip link add iscsi_br type bridge 00:07:44.478 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@74 -- # ip link set iscsi_br up 00:07:44.478 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@77 -- # ip link set init_br master iscsi_br 00:07:44.478 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@78 -- # ip link set tgt_br master iscsi_br 00:07:44.478 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@79 -- # ip link set tgt_br2 master iscsi_br 00:07:44.478 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@82 -- # iptables -I INPUT 1 -i spdk_init_int -p tcp --dport 3260 -j ACCEPT 00:07:44.478 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@83 -- # iptables -A FORWARD -i iscsi_br -o iscsi_br -j ACCEPT 00:07:44.478 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@86 -- # ping -c 1 10.0.0.1 00:07:44.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:44.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:07:44.478 00:07:44.478 --- 10.0.0.1 ping statistics --- 00:07:44.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.478 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:07:44.478 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@87 -- # ping -c 1 10.0.0.3 00:07:44.478 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:44.478 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:07:44.478 00:07:44.478 --- 10.0.0.3 ping statistics --- 00:07:44.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.478 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:07:44.478 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@88 -- # ip netns exec spdk_iscsi_ns ping -c 1 10.0.0.2 00:07:44.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:44.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.032 ms 00:07:44.478 00:07:44.478 --- 10.0.0.2 ping statistics --- 00:07:44.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.478 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:07:44.478 04:58:44 iscsi_tgt -- iscsi_tgt/common.sh@89 -- # ip netns exec spdk_iscsi_ns ping -c 1 10.0.0.2 00:07:44.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:44.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.021 ms 00:07:44.478 00:07:44.478 --- 10.0.0.2 ping statistics --- 00:07:44.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.478 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:07:44.478 04:58:44 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@23 -- # trap 'cleanup_veth_interfaces; exit 1' SIGINT SIGTERM EXIT 00:07:44.478 04:58:44 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@25 -- # run_test iscsi_tgt_sock /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/sock/sock.sh 00:07:44.478 04:58:44 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:44.478 04:58:44 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.478 04:58:44 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:07:44.478 ************************************ 00:07:44.478 START TEST iscsi_tgt_sock 00:07:44.478 ************************************ 00:07:44.478 04:58:44 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/sock/sock.sh 00:07:44.735 * Looking for test storage... 
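The create_veth_interfaces sequence traced above builds the test topology: three veth pairs, with the two target-side ends moved into the spdk_iscsi_ns namespace, the host-side peers enslaved to the iscsi_br bridge, and iptables opened for the iSCSI port. A condensed, root-only sketch of that topology (names and addresses copied from test/iscsi_tgt/common.sh; this reconfigures host networking, so treat it as illustrative rather than something to paste into a workstation):

```shell
# Condensed root-only sketch of create_veth_interfaces as run above.
ip netns add spdk_iscsi_ns

ip link add spdk_init_int type veth peer name init_br   # initiator side
ip link add spdk_tgt_int  type veth peer name tgt_br    # target portal 1
ip link add spdk_tgt_int2 type veth peer name tgt_br2   # target portal 2

# Target ends live inside the namespace; bridge ends stay on the host.
ip link set spdk_tgt_int  netns spdk_iscsi_ns
ip link set spdk_tgt_int2 netns spdk_iscsi_ns

ip addr add 10.0.0.2/24 dev spdk_init_int
ip netns exec spdk_iscsi_ns ip addr add 10.0.0.1/24 dev spdk_tgt_int
ip netns exec spdk_iscsi_ns ip addr add 10.0.0.3/24 dev spdk_tgt_int2

ip link set spdk_init_int up
for dev in init_br tgt_br tgt_br2; do ip link set "$dev" up; done
ip netns exec spdk_iscsi_ns sh -c \
    'ip link set spdk_tgt_int up; ip link set spdk_tgt_int2 up; ip link set lo up'

# One bridge ties the host-side peers together.
ip link add iscsi_br type bridge
ip link set iscsi_br up
for dev in init_br tgt_br tgt_br2; do ip link set "$dev" master iscsi_br; done

iptables -I INPUT 1 -i spdk_init_int -p tcp --dport 3260 -j ACCEPT
iptables -A FORWARD -i iscsi_br -o iscsi_br -j ACCEPT

ping -c 1 10.0.0.1   # host -> target portal through the bridge
```

The single-packet pings in the log are the smoke test for this wiring: host to both target portals, and namespace back to the initiator address.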
00:07:44.735 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/sock 00:07:44.735 04:58:44 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:07:44.735 04:58:44 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:07:44.735 04:58:44 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:07:44.735 04:58:44 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:07:44.735 04:58:44 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:07:44.735 04:58:44 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:07:44.735 04:58:44 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:07:44.735 04:58:44 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:07:44.735 04:58:44 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:07:44.735 04:58:44 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:07:44.735 04:58:44 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:07:44.735 04:58:44 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:07:44.735 04:58:44 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:07:44.735 04:58:44 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:07:44.735 04:58:44 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:07:44.735 04:58:44 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:07:44.735 04:58:44 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:07:44.736 04:58:44 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:07:44.736 04:58:44 
iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:07:44.736 04:58:44 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:07:44.736 04:58:44 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@48 -- # iscsitestinit 00:07:44.736 04:58:44 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:07:44.736 04:58:44 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@50 -- # HELLO_SOCK_APP='ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/examples/hello_sock' 00:07:44.736 04:58:44 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@51 -- # SOCAT_APP=socat 00:07:44.736 04:58:44 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@52 -- # OPENSSL_APP=openssl 00:07:44.736 04:58:44 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@53 -- # PSK='-N ssl -E 1234567890ABCDEF -I psk.spdk.io' 00:07:44.736 04:58:44 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@58 -- # timing_enter sock_client 00:07:44.736 04:58:44 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:44.736 04:58:44 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@10 -- # set +x 00:07:44.736 Testing client path 00:07:44.736 04:58:44 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@59 -- # echo 'Testing client path' 00:07:44.736 04:58:44 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@63 -- # server_pid=74564 00:07:44.736 04:58:44 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@64 -- # trap 'killprocess $server_pid;iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:07:44.736 04:58:44 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@62 -- # socat tcp-l:3260,fork,bind=10.0.0.2 exec:/bin/cat 00:07:44.736 04:58:44 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@66 -- # waitfortcp 74564 10.0.0.2:3260 00:07:44.736 04:58:44 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@25 -- # local addr=10.0.0.2:3260 00:07:44.736 Waiting for process to start up and listen on address 10.0.0.2:3260... 
00:07:44.736 04:58:44 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@27 -- # echo 'Waiting for process to start up and listen on address 10.0.0.2:3260...' 00:07:44.736 04:58:44 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@29 -- # xtrace_disable 00:07:44.736 04:58:44 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@10 -- # set +x 00:07:45.302 [2024-07-23 04:58:45.245905] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:07:45.302 [2024-07-23 04:58:45.245990] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74574 ] 00:07:45.302 [2024-07-23 04:58:45.384372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.302 [2024-07-23 04:58:45.455345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.302 [2024-07-23 04:58:45.455420] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:07:45.302 [2024-07-23 04:58:45.455450] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.2:3260 with sock_impl(posix) 00:07:45.302 [2024-07-23 04:58:45.455627] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.2, 3260) to (10.0.0.1, 57158) 00:07:45.302 [2024-07-23 04:58:45.455712] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 00:07:46.678 [2024-07-23 04:58:46.455732] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:07:46.678 [2024-07-23 04:58:46.455850] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:07:46.678 [2024-07-23 04:58:46.537869] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:07:46.678 [2024-07-23 04:58:46.537958] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74593 ] 00:07:46.678 [2024-07-23 04:58:46.670942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.678 [2024-07-23 04:58:46.734161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.678 [2024-07-23 04:58:46.734241] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:07:46.678 [2024-07-23 04:58:46.734264] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.2:3260 with sock_impl(posix) 00:07:46.678 [2024-07-23 04:58:46.734449] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.2, 3260) to (10.0.0.1, 57170) 00:07:46.678 [2024-07-23 04:58:46.734510] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 00:07:47.612 [2024-07-23 04:58:47.734525] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:07:47.612 [2024-07-23 04:58:47.734662] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:07:47.613 [2024-07-23 04:58:47.821069] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:07:47.613 [2024-07-23 04:58:47.821163] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74617 ] 00:07:47.871 [2024-07-23 04:58:47.956695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.871 [2024-07-23 04:58:48.021421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.871 [2024-07-23 04:58:48.021522] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:07:47.871 [2024-07-23 04:58:48.021545] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.2:3260 with sock_impl(posix) 00:07:47.871 [2024-07-23 04:58:48.021839] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.2, 3260) to (10.0.0.1, 55476) 00:07:47.871 [2024-07-23 04:58:48.021988] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 00:07:48.809 [2024-07-23 04:58:49.022006] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:07:48.809 [2024-07-23 04:58:49.022172] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:07:49.069 killing process with pid 74564 00:07:49.069 Testing SSL server path 00:07:49.069 Waiting for process to start up and listen on address 10.0.0.1:3260... 00:07:49.069 [2024-07-23 04:58:49.179484] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:07:49.069 [2024-07-23 04:58:49.179579] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74655 ] 00:07:49.327 [2024-07-23 04:58:49.309672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.327 [2024-07-23 04:58:49.365856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.327 [2024-07-23 04:58:49.366003] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:07:49.327 [2024-07-23 04:58:49.366090] hello_sock.c: 472:hello_sock_listen: *NOTICE*: Listening connection on 10.0.0.1:3260 with sock_impl(ssl) 00:07:49.586 [2024-07-23 04:58:49.692653] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:07:49.586 [2024-07-23 04:58:49.692754] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74666 ] 00:07:49.846 [2024-07-23 04:58:49.828480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.846 [2024-07-23 04:58:49.905386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.846 [2024-07-23 04:58:49.905485] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:07:49.846 [2024-07-23 04:58:49.905521] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:07:49.846 [2024-07-23 04:58:49.908610] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 57572) 00:07:49.847 [2024-07-23 04:58:49.908814] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 
57572) to (10.0.0.1, 3260) 00:07:49.847 [2024-07-23 04:58:49.910166] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 00:07:50.784 [2024-07-23 04:58:50.910220] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:07:50.784 [2024-07-23 04:58:50.910320] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:07:50.784 [2024-07-23 04:58:50.910367] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:07:51.043 [2024-07-23 04:58:51.013814] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:07:51.043 [2024-07-23 04:58:51.013916] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74688 ] 00:07:51.044 [2024-07-23 04:58:51.145991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.044 [2024-07-23 04:58:51.210258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.044 [2024-07-23 04:58:51.210384] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:07:51.044 [2024-07-23 04:58:51.210409] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:07:51.044 [2024-07-23 04:58:51.211913] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 57580) to (10.0.0.1, 3260) 00:07:51.044 [2024-07-23 04:58:51.212715] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 57580) 00:07:51.044 [2024-07-23 04:58:51.213922] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 
00:07:52.422 [2024-07-23 04:58:52.213975] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:07:52.422 [2024-07-23 04:58:52.214058] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:07:52.422 [2024-07-23 04:58:52.214094] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:07:52.422 [2024-07-23 04:58:52.299681] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:07:52.422 [2024-07-23 04:58:52.299775] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74704 ] 00:07:52.422 [2024-07-23 04:58:52.435477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.422 [2024-07-23 04:58:52.489794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.422 [2024-07-23 04:58:52.489900] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:07:52.422 [2024-07-23 04:58:52.489922] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:07:52.422 [2024-07-23 04:58:52.491017] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 57590) to (10.0.0.1, 3260) 00:07:52.422 [2024-07-23 04:58:52.492129] posix.c: 755:posix_sock_create_ssl_context: *ERROR*: Incorrect TLS version provided: 7 00:07:52.422 [2024-07-23 04:58:52.492189] posix.c:1033:posix_sock_create: *ERROR*: posix_sock_create_ssl_context() failed, errno = 2 00:07:52.422 [2024-07-23 04:58:52.492221] hello_sock.c: 309:hello_sock_connect: *ERROR*: connect error(2): No such file or directory 00:07:52.422 [2024-07-23 04:58:52.492232] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:52.422 [2024-07-23 04:58:52.492267] hello_sock.c: 591:main: *ERROR*: ERROR starting 
application 00:07:52.422 [2024-07-23 04:58:52.492277] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:07:52.422 [2024-07-23 04:58:52.492271] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:07:52.422 [2024-07-23 04:58:52.558045] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:07:52.422 [2024-07-23 04:58:52.558137] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74714 ] 00:07:52.682 [2024-07-23 04:58:52.687617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.682 [2024-07-23 04:58:52.747885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.682 [2024-07-23 04:58:52.747973] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:07:52.682 [2024-07-23 04:58:52.747997] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:07:52.682 [2024-07-23 04:58:52.749345] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 57598) to (10.0.0.1, 3260) 00:07:52.682 [2024-07-23 04:58:52.750359] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 57598) 00:07:52.682 [2024-07-23 04:58:52.751348] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 
00:07:53.653 [2024-07-23 04:58:53.751390] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:07:53.653 [2024-07-23 04:58:53.751472] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:07:53.653 [2024-07-23 04:58:53.751506] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:07:53.653 SSL_connect:before SSL initialization 00:07:53.653 SSL_connect:SSLv3/TLS write client hello 00:07:53.911 [2024-07-23 04:58:53.872440] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.2, 58796) to (10.0.0.1, 3260) 00:07:53.911 SSL_connect:SSLv3/TLS write client hello 00:07:53.911 SSL_connect:SSLv3/TLS read server hello 00:07:53.911 Can't use SSL_get_servername 00:07:53.911 SSL_connect:TLSv1.3 read encrypted extensions 00:07:53.911 SSL_connect:SSLv3/TLS read finished 00:07:53.911 SSL_connect:SSLv3/TLS write change cipher spec 00:07:53.911 SSL_connect:SSLv3/TLS write finished 00:07:53.911 SSL_connect:SSL negotiation finished successfully 00:07:53.911 SSL_connect:SSL negotiation finished successfully 00:07:53.911 SSL_connect:SSLv3/TLS read server session ticket 00:07:55.812 DONE 00:07:55.812 [2024-07-23 04:58:55.816494] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:07:55.812 SSL3 alert write:warning:close notify 00:07:55.812 [2024-07-23 04:58:55.848187] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:07:55.812 [2024-07-23 04:58:55.848277] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74759 ] 00:07:55.812 [2024-07-23 04:58:55.988069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.071 [2024-07-23 04:58:56.058450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.071 [2024-07-23 04:58:56.058556] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:07:56.071 [2024-07-23 04:58:56.058585] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:07:56.071 [2024-07-23 04:58:56.059664] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 57604) to (10.0.0.1, 3260) 00:07:56.071 [2024-07-23 04:58:56.061291] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 57604) 00:07:56.071 [2024-07-23 04:58:56.062026] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:07:56.071 [2024-07-23 04:58:56.062038] hello_sock.c: 208:hello_sock_recv_poll: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:07:57.011 [2024-07-23 04:58:57.062024] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:07:57.011 [2024-07-23 04:58:57.062136] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:57.011 [2024-07-23 04:58:57.062176] hello_sock.c: 591:main: *ERROR*: ERROR starting application 00:07:57.011 [2024-07-23 04:58:57.062187] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:07:57.011 [2024-07-23 04:58:57.144928] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:07:57.011 [2024-07-23 04:58:57.145026] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74774 ] 00:07:57.269 [2024-07-23 04:58:57.283016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.269 [2024-07-23 04:58:57.336219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.269 [2024-07-23 04:58:57.336321] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:07:57.269 [2024-07-23 04:58:57.336356] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:07:57.269 [2024-07-23 04:58:57.337081] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 57606) to (10.0.0.1, 3260) 00:07:57.269 [2024-07-23 04:58:57.338743] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 57606) 00:07:57.269 [2024-07-23 04:58:57.339239] posix.c: 586:posix_sock_psk_find_session_server_cb: *ERROR*: Unknown Client's PSK ID 00:07:57.269 [2024-07-23 04:58:57.339301] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:07:57.269 [2024-07-23 04:58:57.339304] hello_sock.c: 240:hello_sock_writev_poll: *ERROR*: Write to socket failed. Closing connection... 
00:07:57.269 [2024-07-23 04:58:57.339351] hello_sock.c: 208:hello_sock_recv_poll: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:07:58.205 [2024-07-23 04:58:58.339338] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed
00:07:58.205 [2024-07-23 04:58:58.339455] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:58.205 [2024-07-23 04:58:58.339494] hello_sock.c: 591:main: *ERROR*: ERROR starting application
00:07:58.205 [2024-07-23 04:58:58.339504] hello_sock.c: 594:main: *NOTICE*: Exiting from application
00:07:58.205 killing process with pid 74655
00:07:59.577 [2024-07-23 04:58:59.413711] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed
00:07:59.577 [2024-07-23 04:58:59.413877] hello_sock.c: 594:main: *NOTICE*: Exiting from application
00:07:59.577 Waiting for process to start up and listen on address 10.0.0.1:3260...
00:07:59.577 [2024-07-23 04:58:59.536118] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization...
00:07:59.577 [2024-07-23 04:58:59.536205] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74829 ]
00:07:59.577 [2024-07-23 04:58:59.674115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:59.577 [2024-07-23 04:58:59.728113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:59.577 [2024-07-23 04:58:59.728218] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application
00:07:59.577 [2024-07-23 04:58:59.728292] hello_sock.c: 472:hello_sock_listen: *NOTICE*: Listening connection on 10.0.0.1:3260 with sock_impl(posix)
00:07:59.835 [2024-07-23 04:59:00.033679] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.2, 51554) to (10.0.0.1, 3260)
00:07:59.835 [2024-07-23 04:59:00.033815] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed
00:08:00.092 killing process with pid 74829
00:08:01.026 [2024-07-23 04:59:01.064712] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed
00:08:01.026 [2024-07-23 04:59:01.064801] hello_sock.c: 594:main: *NOTICE*: Exiting from application
00:08:01.026
00:08:01.026 real 0m16.526s
00:08:01.026 user 0m19.356s
00:08:01.026 sys 0m2.313s
00:08:01.026 04:59:01 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:01.026 ************************************
00:08:01.026 END TEST iscsi_tgt_sock
00:08:01.026 04:59:01 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@10 -- # set +x
00:08:01.026 ************************************
00:08:01.026 04:59:01 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0
00:08:01.026 04:59:01 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@26 -- # [[ -d /usr/local/calsoft ]]
00:08:01.026 04:59:01 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@27 -- # run_test iscsi_tgt_calsoft /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/calsoft.sh
00:08:01.026 04:59:01 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:08:01.026 04:59:01 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:01.026 04:59:01 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x
00:08:01.026 ************************************
00:08:01.026 START TEST iscsi_tgt_calsoft
00:08:01.026 ************************************
00:08:01.026 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/calsoft.sh
00:08:01.285 * Looking for test storage...
00:08:01.285 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft
00:08:01.285 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh
00:08:01.285 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br
00:08:01.285 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int
00:08:01.285 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br
00:08:01.285 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns
00:08:01.285 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE")
00:08:01.285 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int
00:08:01.285 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2
00:08:01.285 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br
00:08:01.285 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2
00:08:01.285 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1
00:08:01.285 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3
00:08:01.285 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2
00:08:01.285 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260
00:08:01.285 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32
00:08:01.285 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2
00:08:01.285 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY
00:08:01.285 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1
00:08:01.285 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}")
00:08:01.285 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF
00:08:01.285 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@15 -- # MALLOC_BDEV_SIZE=64
00:08:01.285 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@16 -- # MALLOC_BLOCK_SIZE=512
00:08:01.285 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@18 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:08:01.285 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@19 -- # calsoft_py=/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/calsoft.py
00:08:01.285 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@22 -- # mkdir -p /usr/local/etc
00:08:01.285 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@23 -- # cp /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/its.conf /usr/local/etc/
00:08:01.285 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@26 -- # echo IP=10.0.0.1
00:08:01.285 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@28 -- # timing_enter start_iscsi_tgt
00:08:01.285 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@722 -- # xtrace_disable
00:08:01.286 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@10 -- # set +x
00:08:01.286 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@30 -- # iscsitestinit
00:08:01.286 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']'
00:08:01.286 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@33 -- # pid=74910
00:08:01.286 Process pid: 74910
00:08:01.286 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@34 -- # echo 'Process pid: 74910'
00:08:01.286 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@32 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x1 --wait-for-rpc
00:08:01.286 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@36 -- # trap 'killprocess $pid; delete_tmp_conf_files; iscsitestfini; exit 1 ' SIGINT SIGTERM EXIT
00:08:01.286 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@38 -- # waitforlisten 74910
00:08:01.286 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@829 -- # '[' -z 74910 ']'
00:08:01.286 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:01.286 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@834 -- # local max_retries=100
00:08:01.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:01.286 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:01.286 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@838 -- # xtrace_disable
00:08:01.286 04:59:01 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@10 -- # set +x
00:08:01.286 [2024-07-23 04:59:01.419476] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization...
00:08:01.286 [2024-07-23 04:59:01.419604] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74910 ]
00:08:01.544 [2024-07-23 04:59:01.569436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:01.544 [2024-07-23 04:59:01.628543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:02.111 04:59:02 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:02.111 04:59:02 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@862 -- # return 0
00:08:02.111 04:59:02 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config
00:08:02.369 04:59:02 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
00:08:02.936 iscsi_tgt is listening. Running tests...
00:08:02.936 04:59:02 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@41 -- # echo 'iscsi_tgt is listening. Running tests...'
00:08:02.936 04:59:02 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@43 -- # timing_exit start_iscsi_tgt
00:08:02.936 04:59:02 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@728 -- # xtrace_disable
00:08:02.936 04:59:02 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@10 -- # set +x
00:08:02.936 04:59:02 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_auth_group 1 -c 'user:root secret:tester'
00:08:03.198 04:59:03 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_discovery_auth -g 1
00:08:03.457 04:59:03 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260
00:08:03.715 04:59:03 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32
00:08:03.973 04:59:03 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create -b MyBdev 64 512
00:08:04.233 MyBdev
00:08:04.233 04:59:04 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias MyBdev:0 1:2 64 -g 1
00:08:04.233 04:59:04 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@55 -- # sleep 1
00:08:05.608 04:59:05 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@57 -- # '[' '' ']'
00:08:05.608 04:59:05 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/calsoft.py /home/vagrant/spdk_repo/spdk/../output
00:08:05.608 [2024-07-23 04:59:05.522068] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1
00:08:05.608 [2024-07-23 04:59:05.539120] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1
00:08:05.608 [2024-07-23 04:59:05.557408] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66)
00:08:05.608 [2024-07-23 04:59:05.557546] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:08:05.608 [2024-07-23 04:59:05.637775] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66)
00:08:05.608 [2024-07-23 04:59:05.638052] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:08:05.608 [2024-07-23 04:59:05.654049] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=12
00:08:05.608 [2024-07-23 04:59:05.670806] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=67)
00:08:05.608 [2024-07-23 04:59:05.670972] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(4) ignore (ExpCmdSN=5, MaxCmdSN=67)
00:08:05.608 [2024-07-23 04:59:05.671337] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6
00:08:05.608 [2024-07-23 04:59:05.701587] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67)
00:08:05.608 [2024-07-23 04:59:05.701694] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:08:05.608 [2024-07-23 04:59:05.718252] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(2) ignore (ExpCmdSN=3, MaxCmdSN=66)
00:08:05.608 [2024-07-23 04:59:05.718385] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=66)
00:08:05.608 [2024-07-23 04:59:05.718586] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:08:05.608 [2024-07-23 04:59:05.785077] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6
00:08:05.608 [2024-07-23 04:59:05.802244] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1
00:08:05.608 [2024-07-23 04:59:05.819053] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4
00:08:05.867 [2024-07-23 04:59:05.883751] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6
00:08:05.867 [2024-07-23 04:59:05.916263] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1
00:08:05.867 [2024-07-23 04:59:05.962616] iscsi.c:4522:iscsi_pdu_hdr_handle: *ERROR*: before Full Feature
00:08:05.867 PDU
00:08:05.867 00000000 01 81 00 00 00 00 00 81 00 02 3d 03 00 00 00 00 ..........=.....
00:08:05.867 00000010 00 00 00 05 00 00 00 00 00 00 00 00 00 00 00 00 ................
00:08:05.867 00000020 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00:08:05.867 [2024-07-23 04:59:05.962692] iscsi.c:4840:iscsi_read_pdu: *ERROR*: Critical error is detected. Close the connection
00:08:05.867 [2024-07-23 04:59:05.980086] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67)
00:08:05.867 [2024-07-23 04:59:05.980448] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:08:05.867 [2024-07-23 04:59:05.998355] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67)
00:08:05.867 [2024-07-23 04:59:05.998477] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:08:05.867 [2024-07-23 04:59:06.014875] param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 276
00:08:05.867 [2024-07-23 04:59:06.014929] iscsi.c:1303:iscsi_op_login_store_incoming_params: *ERROR*: iscsi_parse_params() failed
00:08:05.867 [2024-07-23 04:59:06.063118] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66)
00:08:05.867 [2024-07-23 04:59:06.063229] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:08:05.867 [2024-07-23 04:59:06.080501] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66)
00:08:05.867 [2024-07-23 04:59:06.080812] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:08:06.127 [2024-07-23 04:59:06.114964] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66)
00:08:06.127 [2024-07-23 04:59:06.115412] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:08:06.127 [2024-07-23 04:59:06.131219] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key TaskReporting
00:08:06.127 [2024-07-23 04:59:06.147722] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67)
00:08:06.127 [2024-07-23 04:59:06.147901] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:08:06.127 [2024-07-23 04:59:06.199273] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=12
00:08:06.127 [2024-07-23 04:59:06.250180] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66)
00:08:06.127 [2024-07-23 04:59:06.250890] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:08:06.127 [2024-07-23 04:59:06.269361] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1
00:08:06.127 [2024-07-23 04:59:06.301988] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67)
00:08:06.127 [2024-07-23 04:59:06.302145] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:08:06.696 [2024-07-23 04:59:06.621556] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key TaskReporting
00:08:06.696 [2024-07-23 04:59:06.672781] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1
00:08:06.696 [2024-07-23 04:59:06.689290] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67)
00:08:06.696 [2024-07-23 04:59:06.689749] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:08:06.696 [2024-07-23 04:59:06.704809] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67)
00:08:06.696 [2024-07-23 04:59:06.704960] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:08:06.696 [2024-07-23 04:59:06.738309] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1
00:08:06.696 [2024-07-23 04:59:06.754632] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66)
00:08:06.696 [2024-07-23 04:59:06.754765] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:08:06.696 [2024-07-23 04:59:06.770665] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4
00:08:06.696 [2024-07-23 04:59:06.816983] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67)
00:08:06.696 [2024-07-23 04:59:06.817117] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:08:06.696 [2024-07-23 04:59:06.902168] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1
00:08:06.697 [2024-07-23 04:59:06.915143] iscsi.c:1263:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 4/max 1, expecting 0
00:08:06.955 [2024-07-23 04:59:06.945983] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4
00:08:06.955 [2024-07-23 04:59:06.964020] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4
00:08:06.955 [2024-07-23 04:59:07.027016] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67)
00:08:06.955 [2024-07-23 04:59:07.027405] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:08:06.955 [2024-07-23 04:59:07.059313] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key TaskReporting
00:08:06.955 [2024-07-23 04:59:07.075692] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67)
00:08:06.955 [2024-07-23 04:59:07.075873] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:08:06.955 [2024-07-23 04:59:07.092684] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=67)
00:08:06.955 [2024-07-23 04:59:07.092856] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:08:06.955 [2024-07-23 04:59:07.140974] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4
00:08:06.955 [2024-07-23 04:59:07.169994] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4
00:08:07.214 [2024-07-23 04:59:07.184961] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67)
00:08:07.214 [2024-07-23 04:59:07.185144] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:08:07.214 [2024-07-23 04:59:07.200852] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=66)
00:08:07.214 [2024-07-23 04:59:07.201071] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:08:07.214 [2024-07-23 04:59:07.250240] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1
00:08:07.214 [2024-07-23 04:59:07.299122] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(3) error ExpCmdSN=4
00:08:07.214 [2024-07-23 04:59:07.299285] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4
00:08:07.214 [2024-07-23 04:59:07.316643] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66)
00:08:07.214 [2024-07-23 04:59:07.316802] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:08:07.214 [2024-07-23 04:59:07.348823] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67)
00:08:07.214 [2024-07-23 04:59:07.349054] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:08:07.215 [2024-07-23 04:59:07.365421] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66)
00:08:07.215 [2024-07-23 04:59:07.365585] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:08:07.215 [2024-07-23 04:59:07.382731] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4
00:08:07.474 [2024-07-23 04:59:07.446579] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=2
00:08:07.474 [2024-07-23 04:59:07.533451] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=66)
00:08:09.407 [2024-07-23 04:59:09.493202] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:08:09.407 [2024-07-23 04:59:09.529824] iscsi.c:3446:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0
00:08:09.407 [2024-07-23 04:59:09.529983] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4
00:08:09.407 [2024-07-23 04:59:09.559326] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key TaskReporting
00:08:09.407 [2024-07-23 04:59:09.576618] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66)
00:08:09.407 [2024-07-23 04:59:09.576955] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:08:09.407 [2024-07-23 04:59:09.593151] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1
00:08:09.407 [2024-07-23 04:59:09.624971] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67)
00:08:09.407 [2024-07-23 04:59:09.625050] iscsi.c:3961:iscsi_handle_recovery_datain: *ERROR*: Initiator requests BegRun: 0x00000000, RunLength:0x00001000 greater than maximum DataSN: 0x00000004.
00:08:09.407 [2024-07-23 04:59:09.625078] iscsi.c:4566:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=10) failed on iqn.2016-06.io.spdk:Target3,t,0x0001(iqn.1994-05.com.redhat:b3283535dc3b,i,0x00230d030000)
00:08:09.407 [2024-07-23 04:59:09.625087] iscsi.c:4840:iscsi_read_pdu: *ERROR*: Critical error is detected. Close the connection
00:08:09.664 [2024-07-23 04:59:09.643246] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67)
00:08:09.664 [2024-07-23 04:59:09.643385] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:08:09.664 [2024-07-23 04:59:09.680472] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1
00:08:09.664 [2024-07-23 04:59:09.709932] iscsi.c:4522:iscsi_pdu_hdr_handle: *ERROR*: before Full Feature
00:08:09.664 PDU
00:08:09.664 00000000 00 81 00 00 00 00 00 81 00 02 3d 03 00 00 00 00 ..........=.....
00:08:09.665 00000010 00 00 00 05 00 00 00 00 00 00 00 00 00 00 00 00 ................
00:08:09.665 00000020 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00:08:09.665 [2024-07-23 04:59:09.709989] iscsi.c:4840:iscsi_read_pdu: *ERROR*: Critical error is detected. Close the connection
00:08:09.665 [2024-07-23 04:59:09.759621] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(1) ignore (ExpCmdSN=3, MaxCmdSN=66)
00:08:09.665 [2024-07-23 04:59:09.759788] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(1) ignore (ExpCmdSN=4, MaxCmdSN=66)
00:08:09.665 [2024-07-23 04:59:09.759923] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=5, MaxCmdSN=67)
00:08:09.665 [2024-07-23 04:59:09.760074] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=6, MaxCmdSN=67)
00:08:09.665 [2024-07-23 04:59:09.760667] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=9
00:08:09.665 [2024-07-23 04:59:09.813196] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1
00:08:09.665 [2024-07-23 04:59:09.830220] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=9
00:08:09.665 [2024-07-23 04:59:09.859754] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=67)
00:08:09.665 [2024-07-23 04:59:09.859902] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:08:09.922 [2024-07-23 04:59:09.890842] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1
00:08:09.922 [2024-07-23 04:59:09.904357] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67)
00:08:09.922 [2024-07-23 04:59:09.904491] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:08:09.922 [2024-07-23 04:59:09.920378] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66)
00:08:09.922 [2024-07-23 04:59:09.920695] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:08:09.922 [2024-07-23 04:59:09.954827] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66)
00:08:09.922 [2024-07-23 04:59:09.955015] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:08:09.922 [2024-07-23 04:59:10.008445] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:08:09.922 [2024-07-23 04:59:10.024579] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:08:09.922 [2024-07-23 04:59:10.042436] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=8, MaxCmdSN=71) 00:08:09.922 [2024-07-23 04:59:10.042575] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=9 00:08:09.922 [2024-07-23 04:59:10.058371] iscsi.c:1263:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 4/max 1, expecting 0 00:08:09.922 [2024-07-23 04:59:10.074925] iscsi.c:4234:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 2745410467, and the dataout task tag is 2728567458 00:08:09.922 [2024-07-23 04:59:10.075077] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:08:09.922 [2024-07-23 04:59:10.075226] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:08:09.922 [2024-07-23 04:59:10.075291] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6 00:08:09.922 [2024-07-23 04:59:10.091169] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:08:09.922 [2024-07-23 04:59:10.091289] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:08:09.922 [2024-07-23 04:59:10.107872] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:08:09.922 [2024-07-23 04:59:10.123102] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(341) ignore (ExpCmdSN=8, MaxCmdSN=71) 00:08:09.922 [2024-07-23 04:59:10.123286] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(8) ignore (ExpCmdSN=9, MaxCmdSN=71) 00:08:09.922 [2024-07-23 04:59:10.123909] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=12 00:08:10.180 [2024-07-23 04:59:10.173929] iscsi.c:4459:iscsi_update_cmdsn: 
*ERROR*: CmdSN(0) error ExpCmdSN=6 00:08:10.180 [2024-07-23 04:59:10.207489] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:08:10.180 [2024-07-23 04:59:10.223475] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:08:10.180 [2024-07-23 04:59:10.270771] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=ffffffff 00:08:10.180 [2024-07-23 04:59:10.354130] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:08:10.180 [2024-07-23 04:59:10.370546] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key ImmediateDataa 00:08:10.180 [2024-07-23 04:59:10.388550] iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:08:10.180 [2024-07-23 04:59:10.388584] iscsi.c:4566:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on iqn.2016-06.io.spdk:Target3,t,0x0001(iqn.1994-05.com.redhat:b3283535dc3b,i,0x00230d030000) 00:08:10.180 [2024-07-23 04:59:10.388610] iscsi.c:4840:iscsi_read_pdu: *ERROR*: Critical error is detected. 
Close the connection
00:08:10.437 [2024-07-23 04:59:10.406233] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67)
00:08:10.437 [2024-07-23 04:59:10.406605] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:08:10.437 [2024-07-23 04:59:10.425290] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67)
00:08:10.437 [2024-07-23 04:59:10.425430] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:08:10.437 [2024-07-23 04:59:10.441798] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:08:10.437 [2024-07-23 04:59:10.476923] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1
00:08:10.437 [2024-07-23 04:59:10.508439] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=2
00:08:10.437 [2024-07-23 04:59:10.590952] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4
00:08:10.437 [2024-07-23 04:59:10.608426] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67)
00:08:10.437 [2024-07-23 04:59:10.608757] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5
00:08:10.437 [2024-07-23 04:59:10.627746] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4
00:08:11.853 [2024-07-23 04:59:11.682284] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=67)
00:08:12.787 [2024-07-23 04:59:12.662219] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=6, MaxCmdSN=68)
00:08:12.787 [2024-07-23 04:59:12.663167] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=7
00:08:12.787 [2024-07-23 04:59:12.682520] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=5, MaxCmdSN=68)
00:08:13.722 [2024-07-23 04:59:13.682743] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(4) ignore (ExpCmdSN=6, MaxCmdSN=69)
00:08:13.722 [2024-07-23 04:59:13.682904] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=7, MaxCmdSN=70)
00:08:13.722 [2024-07-23 04:59:13.682922] iscsi.c:4028:iscsi_handle_status_snack: *ERROR*: Unable to find StatSN: 0x00000007. For a StatusSNACK, assuming this is a proactive SNACK for an untransmitted StatSN, ignoring.
00:08:13.722 [2024-07-23 04:59:13.682952] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=8
00:08:26.031 [2024-07-23 04:59:25.725661] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1073741824) error ExpCmdSN=64
00:08:26.031 [2024-07-23 04:59:25.748705] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1073741824) error ExpCmdSN=64
00:08:26.031 [2024-07-23 04:59:25.766658] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65
00:08:26.031 [2024-07-23 04:59:25.768355] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1073741824) error ExpCmdSN=64
00:08:26.031 [2024-07-23 04:59:25.789634] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65
00:08:26.031 [2024-07-23 04:59:25.808725] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65
00:08:26.031 [2024-07-23 04:59:25.831972] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1073741824) error ExpCmdSN=64
00:08:26.031 [2024-07-23 04:59:25.872710] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65
00:08:26.031 [2024-07-23 04:59:25.873803] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=64
00:08:26.031 [2024-07-23 04:59:25.893206] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1107296256) error ExpCmdSN=66
00:08:26.031 [2024-07-23 04:59:25.915624] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65
00:08:26.031 [2024-07-23 04:59:25.933643] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=67
00:08:26.031 Skipping tc_ffp_15_2. It is known to fail.
00:08:26.031 Skipping tc_ffp_29_2. It is known to fail.
00:08:26.031 Skipping tc_ffp_29_3. It is known to fail.
00:08:26.031 Skipping tc_ffp_29_4. It is known to fail.
00:08:26.031 Skipping tc_err_1_1. It is known to fail.
00:08:26.031 Skipping tc_err_1_2. It is known to fail.
00:08:26.031 Skipping tc_err_2_8. It is known to fail.
00:08:26.031 Skipping tc_err_3_1. It is known to fail.
00:08:26.031 Skipping tc_err_3_2. It is known to fail.
00:08:26.031 Skipping tc_err_3_3. It is known to fail.
00:08:26.031 Skipping tc_err_3_4. It is known to fail.
00:08:26.031 Skipping tc_err_5_1. It is known to fail.
00:08:26.031 Skipping tc_login_3_1. It is known to fail.
00:08:26.031 Skipping tc_login_11_2. It is known to fail.
00:08:26.031 Skipping tc_login_11_4. It is known to fail.
00:08:26.031 Skipping tc_login_2_2. It is known to fail.
00:08:26.031 Skipping tc_login_29_1. It is known to fail.
00:08:26.031 04:59:25 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@62 -- # failed=0
00:08:26.031 04:59:26 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:08:26.031 04:59:26 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@67 -- # iscsicleanup
00:08:26.031 Cleaning up iSCSI connection
00:08:26.031 04:59:26 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection'
00:08:26.031 04:59:26 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout
00:08:26.031 iscsiadm: No matching sessions found
00:08:26.031 04:59:26 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@981 -- # true
00:08:26.031 04:59:26 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete
00:08:26.031 iscsiadm: No records found
00:08:26.031 04:59:26 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@982 -- # true
00:08:26.031 04:59:26 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@983 -- # rm -rf
00:08:26.031 04:59:26 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@68 -- # killprocess 74910
00:08:26.031 04:59:26 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@948 -- # '[' -z 74910 ']'
00:08:26.031 04:59:26
iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@952 -- # kill -0 74910
00:08:26.031 04:59:26 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@953 -- # uname
00:08:26.031 04:59:26 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:08:26.031 04:59:26 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74910
00:08:26.031 04:59:26 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:08:26.031 killing process with pid 74910 04:59:26 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:08:26.031 04:59:26 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74910'
00:08:26.031 04:59:26 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@967 -- # kill 74910
00:08:26.031 04:59:26 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@972 -- # wait 74910
00:08:26.291 04:59:26 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@69 -- # delete_tmp_conf_files
00:08:26.291 04:59:26 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@12 -- # rm -f /usr/local/etc/its.conf
00:08:26.291 04:59:26 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@70 -- # iscsitestfini
00:08:26.291 04:59:26 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']'
00:08:26.291 04:59:26 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@71 -- # exit 0
00:08:26.291
00:08:26.291 real 0m25.229s
00:08:26.291 user 0m41.337s
00:08:26.291 sys 0m2.296s
00:08:26.291 04:59:26 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:26.291 ************************************
00:08:26.291 END TEST iscsi_tgt_calsoft
00:08:26.291 ************************************
00:08:26.291 04:59:26 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@10 -- # set +x
00:08:26.291 04:59:26 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0
00:08:26.291 04:59:26 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@31 -- # run_test iscsi_tgt_filesystem /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem/filesystem.sh
00:08:26.291 04:59:26 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:08:26.291 04:59:26 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:26.291 04:59:26 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x
00:08:26.291 ************************************
00:08:26.291 START TEST iscsi_tgt_filesystem
00:08:26.291 ************************************
00:08:26.291 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem/filesystem.sh
00:08:26.553 * Looking for test storage...
00:08:26.553 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem
00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/setup/common.sh
00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- setup/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh
00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd
00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@34 -- # set -e
00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob
00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob
00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit
00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']'
00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]]
00:08:26.553 04:59:26
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=y 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@18 
-- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@36 -- # 
CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@54 -- # 
CONFIG_HAVE_EVP_MAC=y 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:26.553 04:59:26 
iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:08:26.553 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:08:26.554 04:59:26 
iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:26.554 #define SPDK_CONFIG_H 00:08:26.554 #define SPDK_CONFIG_APPS 1 00:08:26.554 #define SPDK_CONFIG_ARCH native 00:08:26.554 #undef SPDK_CONFIG_ASAN 00:08:26.554 #undef SPDK_CONFIG_AVAHI 00:08:26.554 #undef SPDK_CONFIG_CET 00:08:26.554 #define SPDK_CONFIG_COVERAGE 1 00:08:26.554 #define SPDK_CONFIG_CROSS_PREFIX 00:08:26.554 #undef SPDK_CONFIG_CRYPTO 00:08:26.554 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:26.554 #undef SPDK_CONFIG_CUSTOMOCF 00:08:26.554 #undef SPDK_CONFIG_DAOS 00:08:26.554 #define SPDK_CONFIG_DAOS_DIR 00:08:26.554 #define SPDK_CONFIG_DEBUG 1 00:08:26.554 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:26.554 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:08:26.554 #define SPDK_CONFIG_DPDK_INC_DIR 
//home/vagrant/spdk_repo/dpdk/build/include 00:08:26.554 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:08:26.554 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:26.554 #undef SPDK_CONFIG_DPDK_UADK 00:08:26.554 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:26.554 #define SPDK_CONFIG_EXAMPLES 1 00:08:26.554 #undef SPDK_CONFIG_FC 00:08:26.554 #define SPDK_CONFIG_FC_PATH 00:08:26.554 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:26.554 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:26.554 #undef SPDK_CONFIG_FUSE 00:08:26.554 #undef SPDK_CONFIG_FUZZER 00:08:26.554 #define SPDK_CONFIG_FUZZER_LIB 00:08:26.554 #undef SPDK_CONFIG_GOLANG 00:08:26.554 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:26.554 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:26.554 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:26.554 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:26.554 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:26.554 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:26.554 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:26.554 #define SPDK_CONFIG_IDXD 1 00:08:26.554 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:26.554 #undef SPDK_CONFIG_IPSEC_MB 00:08:26.554 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:26.554 #define SPDK_CONFIG_ISAL 1 00:08:26.554 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:26.554 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:26.554 #define SPDK_CONFIG_LIBDIR 00:08:26.554 #undef SPDK_CONFIG_LTO 00:08:26.554 #define SPDK_CONFIG_MAX_LCORES 128 00:08:26.554 #define SPDK_CONFIG_NVME_CUSE 1 00:08:26.554 #undef SPDK_CONFIG_OCF 00:08:26.554 #define SPDK_CONFIG_OCF_PATH 00:08:26.554 #define SPDK_CONFIG_OPENSSL_PATH 00:08:26.554 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:26.554 #define SPDK_CONFIG_PGO_DIR 00:08:26.554 #undef SPDK_CONFIG_PGO_USE 00:08:26.554 #define SPDK_CONFIG_PREFIX /usr/local 00:08:26.554 #undef SPDK_CONFIG_RAID5F 00:08:26.554 #define SPDK_CONFIG_RBD 1 00:08:26.554 #define SPDK_CONFIG_RDMA 1 00:08:26.554 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:26.554 
#define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:26.554 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:26.554 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:26.554 #define SPDK_CONFIG_SHARED 1 00:08:26.554 #undef SPDK_CONFIG_SMA 00:08:26.554 #define SPDK_CONFIG_TESTS 1 00:08:26.554 #undef SPDK_CONFIG_TSAN 00:08:26.554 #define SPDK_CONFIG_UBLK 1 00:08:26.554 #define SPDK_CONFIG_UBSAN 1 00:08:26.554 #undef SPDK_CONFIG_UNIT_TESTS 00:08:26.554 #undef SPDK_CONFIG_URING 00:08:26.554 #define SPDK_CONFIG_URING_PATH 00:08:26.554 #undef SPDK_CONFIG_URING_ZNS 00:08:26.554 #undef SPDK_CONFIG_USDT 00:08:26.554 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:26.554 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:26.554 #undef SPDK_CONFIG_VFIO_USER 00:08:26.554 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:26.554 #define SPDK_CONFIG_VHOST 1 00:08:26.554 #define SPDK_CONFIG_VIRTIO 1 00:08:26.554 #undef SPDK_CONFIG_VTUNE 00:08:26.554 #define SPDK_CONFIG_VTUNE_DIR 00:08:26.554 #define SPDK_CONFIG_WERROR 1 00:08:26.554 #define SPDK_CONFIG_WPDK_DIR 00:08:26.554 #undef SPDK_CONFIG_XNVME 00:08:26.554 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@5 -- # export PATH 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@68 -- # uname -s 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@68 -- # PM_OS=Linux 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 
00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@76 -- # SUDO[0]= 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@58 -- # : 1 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:08:26.554 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@62 -- # : 0 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@64 -- # : 0 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@66 -- # : 1 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@68 -- # : 0 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@70 -- # : 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@72 -- # : 0 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@74 -- # : 0 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@76 -- # : 1 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@78 -- # : 1 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@80 -- # : 0 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@82 -- # : 0 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@84 -- # : 0 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@86 -- # : 0 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@88 -- # : 0 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@90 -- # : 0 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@92 -- # : 0 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@94 -- # : 0 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@96 -- # : 0 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@98 -- # : 0 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@100 -- # : 0 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@104 -- # : 1 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@106 -- # : 0 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@108 -- # : 0 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@110 -- # : 0 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@112 -- # : 0 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@114 -- # : 0 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:08:26.555 04:59:26 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@116 -- # : 0 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@118 -- # : 0 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@120 -- # : 0 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@122 -- # : 1 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@124 -- # : /home/vagrant/spdk_repo/dpdk/build 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@126 -- # : 0 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@128 -- # : 0 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@130 -- # : 0 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@132 -- # : 0 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@134 -- # : 0 
00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@136 -- # : 0 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@138 -- # : v22.11.4 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@140 -- # : true 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@142 -- # : 0 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@144 -- # : 0 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@146 -- # : 0 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@148 -- # : 0 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@150 -- # : 0 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@152 -- # : 0 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@153 -- # 
export SPDK_TEST_SCANBUILD 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@154 -- # : 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@156 -- # : 0 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@158 -- # : 0 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@160 -- # : 0 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@162 -- # : 0 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@164 -- # : 0 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@167 -- # : 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@169 -- # : 0 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@171 -- # : 0 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:26.555 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:08:26.556 
04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@200 -- # cat 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:08:26.556 04:59:26 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@318 -- # [[ -z 75623 ]] 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@318 -- # kill -0 75623 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 
00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.TBA6Eo 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem /tmp/spdk.TBA6Eo/tests/filesystem /tmp/spdk.TBA6Eo 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@327 -- # df -T 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@361 -- # mounts["$mount"]=devtmpfs 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=4194304 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=4194304 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6264516608 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267891712 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=2496167936 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=2507157504 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10989568 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:26.556 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13218553856 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5825355776 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13218553856 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5825355776 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda2 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=843546624 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1012768768 00:08:26.557 04:59:26 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=100016128 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6267744256 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267891712 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=147456 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda3 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=92499968 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=104607744 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12107776 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=1253572608 00:08:26.557 
04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1253576704 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/iscsi-vg-autotest/fedora38-libvirt/output 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=96794087424 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=2908692480 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:08:26.557 * Looking for test storage... 
00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@372 -- # mount=/home 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@374 -- # target_space=13218553856 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == tmpfs ]] 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == ramfs ]] 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@380 -- # [[ /home == / ]] 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:08:26.557 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:08:26.557 04:59:26 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@389 -- # return 0 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1687 -- # true 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@27 -- # exec 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@29 -- # exec 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@18 -- # set -x 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:08:26.557 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:08:26.558 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:08:26.558 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:08:26.558 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:08:26.558 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:08:26.558 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:08:26.558 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:08:26.558 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:08:26.558 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:08:26.558 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:08:26.558 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:08:26.558 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:08:26.558 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@25 -- # 
INITIATOR_TAG=2 00:08:26.558 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:08:26.558 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:08:26.558 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:08:26.558 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:08:26.558 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@11 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:26.558 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.558 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.558 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.558 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.558 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.558 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.558 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@5 -- # export PATH 00:08:26.558 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.558 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@13 -- # iscsitestinit 00:08:26.558 04:59:26 
iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:08:26.558 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@29 -- # timing_enter start_iscsi_tgt 00:08:26.558 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:26.558 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:26.558 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@32 -- # pid=75660 00:08:26.558 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@33 -- # echo 'Process pid: 75660' 00:08:26.558 Process pid: 75660 00:08:26.558 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@35 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:08:26.558 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@31 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --wait-for-rpc 00:08:26.558 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@37 -- # waitforlisten 75660 00:08:26.558 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@829 -- # '[' -z 75660 ']' 00:08:26.558 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.558 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:26.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.558 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:26.558 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:26.558 04:59:26 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:26.819 [2024-07-23 04:59:26.774987] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:08:26.819 [2024-07-23 04:59:26.775075] [ DPDK EAL parameters: iscsi --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75660 ] 00:08:26.819 [2024-07-23 04:59:26.914498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:26.819 [2024-07-23 04:59:26.980638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.819 [2024-07-23 04:59:26.980765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:26.819 [2024-07-23 04:59:26.980900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.819 [2024-07-23 04:59:26.980901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:26.819 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:26.819 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@862 -- # return 0 00:08:26.819 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@38 -- # rpc_cmd iscsi_set_options -o 30 -a 16 00:08:26.819 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.819 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:26.819 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.819 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@39 -- # rpc_cmd framework_start_init 00:08:26.819 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.819 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:27.079 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.079 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@40 -- # echo 'iscsi_tgt is listening. Running tests...' 00:08:27.079 iscsi_tgt is listening. Running tests... 00:08:27.079 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@42 -- # timing_exit start_iscsi_tgt 00:08:27.079 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:27.079 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:27.079 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@44 -- # get_first_nvme_bdf 00:08:27.079 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1524 -- # bdfs=() 00:08:27.079 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1524 -- # local bdfs 00:08:27.079 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:08:27.079 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:08:27.079 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1513 -- # bdfs=() 00:08:27.079 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1513 -- # local bdfs 00:08:27.079 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:27.079 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:27.079 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:08:27.337 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:08:27.337 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:08:27.337 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:08:27.337 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@44 -- # bdf=0000:00:10.0 00:08:27.337 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@45 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:08:27.338 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.338 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:27.338 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.338 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@46 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:08:27.338 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.338 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:27.338 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.338 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@47 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t pcie -a 0000:00:10.0 00:08:27.338 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.338 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:27.338 Nvme0n1 00:08:27.338 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.338 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@49 -- # rpc_cmd bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:08:27.338 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.338 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:27.338 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.338 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@49 -- # ls_guid=237dd511-fe8f-41fa-82e0-f31560d937d1 00:08:27.338 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@50 -- # get_lvs_free_mb 237dd511-fe8f-41fa-82e0-f31560d937d1 00:08:27.338 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1364 -- # local lvs_uuid=237dd511-fe8f-41fa-82e0-f31560d937d1 00:08:27.338 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1365 -- # local lvs_info 00:08:27.338 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1366 -- # local fc 00:08:27.338 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1367 -- # local cs 00:08:27.338 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_lvol_get_lvstores 00:08:27.338 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.338 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:27.338 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.338 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:08:27.338 { 00:08:27.338 "uuid": "237dd511-fe8f-41fa-82e0-f31560d937d1", 00:08:27.338 "name": "lvs_0", 00:08:27.338 "base_bdev": "Nvme0n1", 00:08:27.338 "total_data_clusters": 1278, 00:08:27.338 "free_clusters": 1278, 00:08:27.338 "block_size": 4096, 00:08:27.338 "cluster_size": 4194304 00:08:27.338 } 00:08:27.338 ]' 00:08:27.338 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="237dd511-fe8f-41fa-82e0-f31560d937d1") 
.free_clusters' 00:08:27.338 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1369 -- # fc=1278 00:08:27.338 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="237dd511-fe8f-41fa-82e0-f31560d937d1") .cluster_size' 00:08:27.338 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1370 -- # cs=4194304 00:08:27.338 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1373 -- # free_mb=5112 00:08:27.338 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1374 -- # echo 5112 00:08:27.338 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@50 -- # free_mb=5112 00:08:27.595 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@52 -- # '[' 5112 -gt 2048 ']' 00:08:27.595 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@53 -- # rpc_cmd bdev_lvol_create -u 237dd511-fe8f-41fa-82e0-f31560d937d1 lbd_0 2048 00:08:27.595 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.595 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:27.595 6eafa2e3-9102-4d56-b27e-917f6f0a7e28 00:08:27.595 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.595 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@61 -- # lvol_name=lvs_0/lbd_0 00:08:27.595 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@62 -- # rpc_cmd iscsi_create_target_node Target1 Target1_alias lvs_0/lbd_0:0 1:2 256 -d 00:08:27.595 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.595 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:27.595 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.595 04:59:27 iscsi_tgt.iscsi_tgt_filesystem -- 
filesystem/filesystem.sh@63 -- # sleep 1 00:08:28.532 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@65 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:08:28.532 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:08:28.532 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@66 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:08:28.532 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:08:28.532 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:08:28.532 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@67 -- # waitforiscsidevices 1 00:08:28.532 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@116 -- # local num=1 00:08:28.532 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:08:28.532 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:08:28.532 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:08:28.532 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:08:28.532 [2024-07-23 04:59:28.701217] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:28.532 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@119 -- # true 00:08:28.532 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@119 -- # n=0 00:08:28.532 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 1 ']' 00:08:28.532 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@121 -- # sleep 0.1 00:08:28.791 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@118 -- # (( i++ )) 00:08:28.791 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:08:28.791 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@119 -- # 
iscsiadm -m session -P 3 00:08:28.791 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:08:28.791 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@119 -- # n=1 00:08:28.791 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:08:28.791 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@123 -- # return 0 00:08:28.791 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@69 -- # get_bdev_size lvs_0/lbd_0 00:08:28.791 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1378 -- # local bdev_name=lvs_0/lbd_0 00:08:28.791 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:28.791 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1380 -- # local bs 00:08:28.791 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1381 -- # local nb 00:08:28.791 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b lvs_0/lbd_0 00:08:28.791 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.791 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:28.791 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.791 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:28.791 { 00:08:28.791 "name": "6eafa2e3-9102-4d56-b27e-917f6f0a7e28", 00:08:28.791 "aliases": [ 00:08:28.791 "lvs_0/lbd_0" 00:08:28.791 ], 00:08:28.791 "product_name": "Logical Volume", 00:08:28.791 "block_size": 4096, 00:08:28.791 "num_blocks": 524288, 00:08:28.791 "uuid": "6eafa2e3-9102-4d56-b27e-917f6f0a7e28", 00:08:28.791 "assigned_rate_limits": { 00:08:28.791 "rw_ios_per_sec": 0, 00:08:28.791 "rw_mbytes_per_sec": 0, 00:08:28.791 "r_mbytes_per_sec": 0, 00:08:28.791 
"w_mbytes_per_sec": 0 00:08:28.791 }, 00:08:28.791 "claimed": false, 00:08:28.791 "zoned": false, 00:08:28.791 "supported_io_types": { 00:08:28.791 "read": true, 00:08:28.791 "write": true, 00:08:28.791 "unmap": true, 00:08:28.791 "flush": false, 00:08:28.791 "reset": true, 00:08:28.791 "nvme_admin": false, 00:08:28.791 "nvme_io": false, 00:08:28.791 "nvme_io_md": false, 00:08:28.791 "write_zeroes": true, 00:08:28.791 "zcopy": false, 00:08:28.791 "get_zone_info": false, 00:08:28.791 "zone_management": false, 00:08:28.791 "zone_append": false, 00:08:28.791 "compare": false, 00:08:28.791 "compare_and_write": false, 00:08:28.791 "abort": false, 00:08:28.791 "seek_hole": true, 00:08:28.791 "seek_data": true, 00:08:28.791 "copy": false, 00:08:28.791 "nvme_iov_md": false 00:08:28.791 }, 00:08:28.791 "driver_specific": { 00:08:28.791 "lvol": { 00:08:28.791 "lvol_store_uuid": "237dd511-fe8f-41fa-82e0-f31560d937d1", 00:08:28.791 "base_bdev": "Nvme0n1", 00:08:28.791 "thin_provision": false, 00:08:28.791 "num_allocated_clusters": 512, 00:08:28.791 "snapshot": false, 00:08:28.791 "clone": false, 00:08:28.791 "esnap_clone": false 00:08:28.791 } 00:08:28.791 } 00:08:28.791 } 00:08:28.791 ]' 00:08:28.791 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:28.791 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1383 -- # bs=4096 00:08:28.791 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:28.791 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1384 -- # nb=524288 00:08:28.791 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1387 -- # bdev_size=2048 00:08:28.791 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1388 -- # echo 2048 00:08:28.791 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@69 -- # lvol_size=2147483648 00:08:28.791 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- 
filesystem/filesystem.sh@70 -- # trap 'iscsicleanup; remove_backends; umount /mnt/device; rm -rf /mnt/device; killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:08:28.791 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@72 -- # mkdir -p /mnt/device 00:08:28.791 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@74 -- # iscsiadm -m session -P 3 00:08:28.791 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@74 -- # grep 'Attached scsi disk' 00:08:28.792 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@74 -- # awk '{print $4}' 00:08:28.792 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@74 -- # dev=sda 00:08:28.792 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@76 -- # waitforfile /dev/sda 00:08:28.792 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1265 -- # local i=0 00:08:28.792 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda ']' 00:08:28.792 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /dev/sda ']' 00:08:28.792 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1276 -- # return 0 00:08:28.792 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@78 -- # sec_size_to_bytes sda 00:08:28.792 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- setup/common.sh@76 -- # local dev=sda 00:08:28.792 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- setup/common.sh@78 -- # [[ -e /sys/block/sda ]] 00:08:28.792 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- setup/common.sh@80 -- # echo 2147483648 00:08:28.792 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@78 -- # dev_size=2147483648 00:08:28.792 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@80 -- # (( lvol_size == dev_size )) 00:08:28.792 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@81 -- # parted -s /dev/sda mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:28.792 04:59:28 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@82 -- # sleep 1 00:08:28.792 [2024-07-23 04:59:28.991368] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:30.169 04:59:29 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@144 -- # run_test iscsi_tgt_filesystem_ext4 filesystem_test ext4 00:08:30.169 04:59:29 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:30.169 04:59:29 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.169 04:59:29 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:30.169 ************************************ 00:08:30.169 START TEST iscsi_tgt_filesystem_ext4 00:08:30.169 ************************************ 00:08:30.169 04:59:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1123 -- # filesystem_test ext4 00:08:30.169 04:59:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@89 -- # fstype=ext4 00:08:30.169 
04:59:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@91 -- # make_filesystem ext4 /dev/sda1 00:08:30.169 04:59:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:30.169 04:59:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/sda1 00:08:30.169 04:59:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:30.169 04:59:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:30.169 04:59:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:30.169 04:59:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:30.169 04:59:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda1 00:08:30.169 mke2fs 1.46.5 (30-Dec-2021) 00:08:30.169 Discarding device blocks: 0/522240 done 00:08:30.169 Creating filesystem with 522240 4k blocks and 130560 inodes 00:08:30.169 Filesystem UUID: 59e0d3f8-13ee-457a-b03c-8d99f2c8b5c1 00:08:30.169 Superblock backups stored on blocks: 00:08:30.169 32768, 98304, 163840, 229376, 294912 00:08:30.169 00:08:30.169 Allocating group tables: 0/16 done 00:08:30.169 Writing inode tables: 0/16 done 00:08:30.169 Creating journal (8192 blocks): done 00:08:30.169 Writing superblocks and filesystem accounting information: 0/16 done 00:08:30.169 00:08:30.169 04:59:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:30.169 04:59:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@92 -- # mount /dev/sda1 /mnt/device 00:08:30.169 04:59:30 
iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@93 -- # '[' 1 -eq 1 ']' 00:08:30.169 04:59:30 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@94 -- # fio -filename=/mnt/device/test -direct=1 -iodepth 64 -thread=1 -invalidate=1 -rw=randwrite -ioengine=libaio -bs=4k -size=1024M -name=job0 00:08:30.169 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 00:08:30.169 fio-3.35 00:08:30.169 Starting 1 thread 00:08:30.169 job0: Laying out IO file (1 file / 1024MiB) 00:08:45.045 00:08:45.045 job0: (groupid=0, jobs=1): err= 0: pid=75819: Tue Jul 23 04:59:44 2024 00:08:45.045 write: IOPS=18.8k, BW=73.4MiB/s (76.9MB/s)(1024MiB/13955msec); 0 zone resets 00:08:45.045 slat (usec): min=4, max=32716, avg=18.05, stdev=154.79 00:08:45.045 clat (usec): min=907, max=43185, avg=3387.25, stdev=1713.71 00:08:45.045 lat (usec): min=931, max=43264, avg=3405.30, stdev=1725.07 00:08:45.045 clat percentiles (usec): 00:08:45.045 | 1.00th=[ 1729], 5.00th=[ 1942], 10.00th=[ 2180], 20.00th=[ 2474], 00:08:45.045 | 30.00th=[ 2868], 40.00th=[ 3130], 50.00th=[ 3326], 60.00th=[ 3523], 00:08:45.045 | 70.00th=[ 3720], 80.00th=[ 3949], 90.00th=[ 4359], 95.00th=[ 4752], 00:08:45.045 | 99.00th=[ 5342], 99.50th=[ 6718], 99.90th=[24773], 99.95th=[38536], 00:08:45.045 | 99.99th=[41681] 00:08:45.045 bw ( KiB/s): min=61016, max=79192, per=99.66%, avg=74885.33, stdev=4424.27, samples=27 00:08:45.045 iops : min=15254, max=19798, avg=18721.33, stdev=1106.07, samples=27 00:08:45.045 lat (usec) : 1000=0.01% 00:08:45.045 lat (msec) : 2=6.03%, 4=75.30%, 10=18.20%, 20=0.08%, 50=0.38% 00:08:45.045 cpu : usr=5.63%, sys=20.86%, ctx=24196, majf=0, minf=1 00:08:45.045 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:08:45.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:45.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.1%, >=64=0.0% 00:08:45.045 issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:45.045 latency : target=0, window=0, percentile=100.00%, depth=64 00:08:45.045 00:08:45.045 Run status group 0 (all jobs): 00:08:45.045 WRITE: bw=73.4MiB/s (76.9MB/s), 73.4MiB/s-73.4MiB/s (76.9MB/s-76.9MB/s), io=1024MiB (1074MB), run=13955-13955msec 00:08:45.045 00:08:45.045 Disk stats (read/write): 00:08:45.045 sda: ios=0/259564, merge=0/2171, ticks=0/784902, in_queue=784902, util=99.33% 00:08:45.045 04:59:44 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@96 -- # umount /mnt/device 00:08:45.045 04:59:44 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@98 -- # iscsiadm -m node --logout 00:08:45.045 Logging out of session [sid: 1, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:08:45.045 Logout of [sid: 1, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:08:45.045 04:59:44 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@99 -- # waitforiscsidevices 0 00:08:45.045 04:59:44 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@116 -- # local num=0 00:08:45.045 04:59:44 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:08:45.045 04:59:44 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:08:45.045 04:59:44 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:08:45.045 04:59:44 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:08:45.045 iscsiadm: No active sessions. 
00:08:45.045 04:59:44 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # true 00:08:45.045 04:59:44 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # n=0 00:08:45.045 04:59:44 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:08:45.045 04:59:44 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@123 -- # return 0 00:08:45.045 04:59:44 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@100 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:08:45.045 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:08:45.045 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:08:45.045 04:59:44 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@101 -- # waitforiscsidevices 1 00:08:45.045 04:59:44 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@116 -- # local num=1 00:08:45.045 04:59:44 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:08:45.045 04:59:44 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:08:45.045 04:59:44 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:08:45.045 04:59:44 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:08:45.045 [2024-07-23 04:59:44.617628] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:45.045 04:59:44 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # n=1 00:08:45.045 04:59:44 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@120 -- # '[' 1 
-ne 1 ']' 00:08:45.045 04:59:44 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@123 -- # return 0 00:08:45.045 04:59:44 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@103 -- # iscsiadm -m session -P 3 00:08:45.045 04:59:44 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@103 -- # grep 'Attached scsi disk' 00:08:45.045 04:59:44 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@103 -- # awk '{print $4}' 00:08:45.045 04:59:44 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@103 -- # dev=sda 00:08:45.045 04:59:44 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@105 -- # waitforfile /dev/sda1 00:08:45.045 04:59:44 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1265 -- # local i=0 00:08:45.045 04:59:44 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda1 ']' 00:08:45.045 04:59:44 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1272 -- # '[' '!' -e /dev/sda1 ']' 00:08:45.045 04:59:44 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1276 -- # return 0 00:08:45.045 04:59:44 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@106 -- # mount -o rw /dev/sda1 /mnt/device 00:08:45.045 04:59:44 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@107 -- # '[' -f /mnt/device/test ']' 00:08:45.045 File existed. 00:08:45.045 04:59:44 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@108 -- # echo 'File existed.' 
00:08:45.045 04:59:44 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@109 -- # fio -filename=/mnt/device/test -direct=1 -iodepth 64 -thread=1 -invalidate=1 -rw=randread -ioengine=libaio -bs=4k -runtime=20 -time_based=1 -name=job0 00:08:45.045 job0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 00:08:45.045 fio-3.35 00:08:45.045 Starting 1 thread 00:09:06.970 00:09:06.970 job0: (groupid=0, jobs=1): err= 0: pid=76071: Tue Jul 23 05:00:04 2024 00:09:06.970 read: IOPS=18.4k, BW=71.8MiB/s (75.3MB/s)(1437MiB/20004msec) 00:09:06.970 slat (usec): min=2, max=3266, avg= 8.69, stdev=38.87 00:09:06.970 clat (usec): min=496, max=26830, avg=3467.95, stdev=1069.25 00:09:06.970 lat (usec): min=507, max=28068, avg=3476.64, stdev=1075.11 00:09:06.970 clat percentiles (usec): 00:09:06.970 | 1.00th=[ 1860], 5.00th=[ 2114], 10.00th=[ 2245], 20.00th=[ 2606], 00:09:06.970 | 30.00th=[ 2868], 40.00th=[ 3195], 50.00th=[ 3392], 60.00th=[ 3654], 00:09:06.970 | 70.00th=[ 3916], 80.00th=[ 4293], 90.00th=[ 4621], 95.00th=[ 5014], 00:09:06.970 | 99.00th=[ 5669], 99.50th=[ 6259], 99.90th=[14615], 99.95th=[17957], 00:09:06.970 | 99.99th=[22676] 00:09:06.970 bw ( KiB/s): min=42920, max=79144, per=100.00%, avg=73809.28, stdev=5848.51, samples=39 00:09:06.970 iops : min=10730, max=19786, avg=18452.36, stdev=1462.16, samples=39 00:09:06.970 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:09:06.970 lat (msec) : 2=2.31%, 4=70.50%, 10=26.98%, 20=0.14%, 50=0.04% 00:09:06.970 cpu : usr=5.86%, sys=13.56%, ctx=32673, majf=0, minf=65 00:09:06.970 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:09:06.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.970 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:09:06.970 issued rwts: total=367838,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.970 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:09:06.970 00:09:06.970 Run status group 0 (all jobs): 00:09:06.970 READ: bw=71.8MiB/s (75.3MB/s), 71.8MiB/s-71.8MiB/s (75.3MB/s-75.3MB/s), io=1437MiB (1507MB), run=20004-20004msec 00:09:06.970 00:09:06.970 Disk stats (read/write): 00:09:06.970 sda: ios=365232/5, merge=1389/2, ticks=1198028/6, in_queue=1198035, util=99.59% 00:09:06.970 05:00:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@116 -- # rm -rf /mnt/device/test 00:09:06.970 05:00:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@117 -- # umount /mnt/device 00:09:06.970 00:09:06.970 real 0m34.931s 00:09:06.970 user 0m2.196s 00:09:06.970 sys 0m5.858s 00:09:06.970 05:00:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:06.970 05:00:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:06.970 ************************************ 00:09:06.970 END TEST iscsi_tgt_filesystem_ext4 00:09:06.970 ************************************ 00:09:06.970 05:00:04 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:09:06.970 05:00:04 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@145 -- # run_test iscsi_tgt_filesystem_btrfs filesystem_test btrfs 00:09:06.970 05:00:04 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:06.970 05:00:04 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.970 05:00:04 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:06.970 ************************************ 00:09:06.970 START TEST iscsi_tgt_filesystem_btrfs 00:09:06.970 ************************************ 00:09:06.970 05:00:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1123 -- # filesystem_test btrfs 00:09:06.970 
05:00:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@89 -- # fstype=btrfs 00:09:06.970 05:00:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@91 -- # make_filesystem btrfs /dev/sda1 00:09:06.970 05:00:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:09:06.970 05:00:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/sda1 00:09:06.970 05:00:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:09:06.970 05:00:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:09:06.970 05:00:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:09:06.970 05:00:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:09:06.970 05:00:04 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/sda1 00:09:06.970 btrfs-progs v6.6.2 00:09:06.970 See https://btrfs.readthedocs.io for more information. 00:09:06.970 00:09:06.970 Performing full device TRIM /dev/sda1 (1.99GiB) ... 
00:09:06.970 NOTE: several default settings have changed in version 5.15, please make sure 00:09:06.970 this does not affect your deployments: 00:09:06.970 - DUP for metadata (-m dup) 00:09:06.970 - enabled no-holes (-O no-holes) 00:09:06.970 - enabled free-space-tree (-R free-space-tree) 00:09:06.970 00:09:06.970 Label: (null) 00:09:06.970 UUID: 8010f96f-6e48-43a0-bc86-4e8f6cf26d8d 00:09:06.970 Node size: 16384 00:09:06.970 Sector size: 4096 00:09:06.970 Filesystem size: 1.99GiB 00:09:06.970 Block group profiles: 00:09:06.970 Data: single 8.00MiB 00:09:06.970 Metadata: DUP 102.00MiB 00:09:06.970 System: DUP 8.00MiB 00:09:06.970 SSD detected: yes 00:09:06.970 Zoned device: no 00:09:06.970 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:06.970 Runtime features: free-space-tree 00:09:06.970 Checksum: crc32c 00:09:06.970 Number of devices: 1 00:09:06.970 Devices: 00:09:06.970 ID SIZE PATH 00:09:06.970 1 1.99GiB /dev/sda1 00:09:06.970 00:09:06.970 05:00:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:09:06.970 05:00:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@92 -- # mount /dev/sda1 /mnt/device 00:09:06.970 05:00:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@93 -- # '[' 1 -eq 1 ']' 00:09:06.970 05:00:05 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@94 -- # fio -filename=/mnt/device/test -direct=1 -iodepth 64 -thread=1 -invalidate=1 -rw=randwrite -ioengine=libaio -bs=4k -size=1024M -name=job0 00:09:06.970 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 00:09:06.970 fio-3.35 00:09:06.970 Starting 1 thread 00:09:06.970 job0: Laying out IO file (1 file / 1024MiB) 00:09:25.057 00:09:25.057 job0: (groupid=0, jobs=1): err= 0: pid=76337: Tue Jul 23 05:00:22 2024 00:09:25.057 write: IOPS=15.4k, 
BW=60.0MiB/s (62.9MB/s)(1024MiB/17076msec); 0 zone resets 00:09:25.057 slat (usec): min=6, max=4166, avg=43.97, stdev=87.83 00:09:25.057 clat (usec): min=1173, max=14051, avg=4122.74, stdev=1284.10 00:09:25.057 lat (usec): min=1229, max=14111, avg=4166.70, stdev=1294.33 00:09:25.057 clat percentiles (usec): 00:09:25.057 | 1.00th=[ 1844], 5.00th=[ 2245], 10.00th=[ 2573], 20.00th=[ 3032], 00:09:25.057 | 30.00th=[ 3392], 40.00th=[ 3720], 50.00th=[ 4015], 60.00th=[ 4293], 00:09:25.057 | 70.00th=[ 4555], 80.00th=[ 5014], 90.00th=[ 5866], 95.00th=[ 6521], 00:09:25.057 | 99.00th=[ 7898], 99.50th=[ 8455], 99.90th=[ 9896], 99.95th=[10552], 00:09:25.057 | 99.99th=[12387] 00:09:25.057 bw ( KiB/s): min=54016, max=70568, per=99.97%, avg=61388.47, stdev=3572.49, samples=34 00:09:25.057 iops : min=13504, max=17642, avg=15347.12, stdev=893.12, samples=34 00:09:25.057 lat (msec) : 2=1.90%, 4=47.38%, 10=50.63%, 20=0.09% 00:09:25.057 cpu : usr=6.54%, sys=35.33%, ctx=49126, majf=0, minf=1 00:09:25.057 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:09:25.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.057 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:09:25.057 issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.057 latency : target=0, window=0, percentile=100.00%, depth=64 00:09:25.057 00:09:25.057 Run status group 0 (all jobs): 00:09:25.057 WRITE: bw=60.0MiB/s (62.9MB/s), 60.0MiB/s-60.0MiB/s (62.9MB/s-62.9MB/s), io=1024MiB (1074MB), run=17076-17076msec 00:09:25.057 05:00:22 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@96 -- # umount /mnt/device 00:09:25.057 05:00:22 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@98 -- # iscsiadm -m node --logout 00:09:25.057 Logging out of session [sid: 2, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:09:25.058 Logout of [sid: 2, target: 
iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:09:25.058 05:00:22 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@99 -- # waitforiscsidevices 0 00:09:25.058 05:00:22 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@116 -- # local num=0 00:09:25.058 05:00:22 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:09:25.058 05:00:22 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:09:25.058 05:00:22 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:09:25.058 05:00:22 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:09:25.058 iscsiadm: No active sessions. 00:09:25.058 05:00:22 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # true 00:09:25.058 05:00:22 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # n=0 00:09:25.058 05:00:22 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:09:25.058 05:00:22 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@123 -- # return 0 00:09:25.058 05:00:22 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@100 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:09:25.058 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:09:25.058 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:09:25.058 05:00:22 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@101 -- # waitforiscsidevices 1 00:09:25.058 05:00:22 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@116 -- # local num=1 00:09:25.058 05:00:22 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:09:25.058 05:00:22 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:09:25.058 05:00:22 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:09:25.058 05:00:22 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:09:25.058 05:00:22 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # true 00:09:25.058 [2024-07-23 05:00:22.772226] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:25.058 05:00:22 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # n=0 00:09:25.058 05:00:22 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 1 ']' 00:09:25.058 05:00:22 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@121 -- # sleep 0.1 00:09:25.058 05:00:22 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@118 -- # (( i++ )) 00:09:25.058 05:00:22 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:09:25.058 05:00:22 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:09:25.058 05:00:22 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:09:25.058 05:00:22 
iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # n=1 00:09:25.058 05:00:22 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:09:25.058 05:00:22 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@123 -- # return 0 00:09:25.058 05:00:22 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@103 -- # iscsiadm -m session -P 3 00:09:25.058 05:00:22 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@103 -- # grep 'Attached scsi disk' 00:09:25.058 05:00:22 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@103 -- # awk '{print $4}' 00:09:25.058 05:00:22 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@103 -- # dev=sda 00:09:25.058 05:00:22 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@105 -- # waitforfile /dev/sda1 00:09:25.058 05:00:22 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1265 -- # local i=0 00:09:25.058 05:00:22 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda1 ']' 00:09:25.058 05:00:22 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1272 -- # '[' '!' -e /dev/sda1 ']' 00:09:25.058 05:00:22 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1276 -- # return 0 00:09:25.058 05:00:22 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@106 -- # mount -o rw /dev/sda1 /mnt/device 00:09:25.058 File existed. 
00:09:25.058 05:00:22 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@107 -- # '[' -f /mnt/device/test ']' 00:09:25.058 05:00:22 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@108 -- # echo 'File existed.' 00:09:25.058 05:00:22 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@109 -- # fio -filename=/mnt/device/test -direct=1 -iodepth 64 -thread=1 -invalidate=1 -rw=randread -ioengine=libaio -bs=4k -runtime=20 -time_based=1 -name=job0 00:09:25.058 job0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 00:09:25.058 fio-3.35 00:09:25.058 Starting 1 thread 00:09:43.199 00:09:43.199 job0: (groupid=0, jobs=1): err= 0: pid=76586: Tue Jul 23 05:00:43 2024 00:09:43.199 read: IOPS=16.7k, BW=65.1MiB/s (68.2MB/s)(1301MiB/20003msec) 00:09:43.199 slat (usec): min=4, max=2636, avg= 9.44, stdev=22.56 00:09:43.199 clat (usec): min=936, max=34190, avg=3828.95, stdev=1087.21 00:09:43.199 lat (usec): min=958, max=34847, avg=3838.39, stdev=1092.82 00:09:43.199 clat percentiles (usec): 00:09:43.199 | 1.00th=[ 2073], 5.00th=[ 2343], 10.00th=[ 2573], 20.00th=[ 2900], 00:09:43.199 | 30.00th=[ 3195], 40.00th=[ 3490], 50.00th=[ 3818], 60.00th=[ 4047], 00:09:43.199 | 70.00th=[ 4359], 80.00th=[ 4686], 90.00th=[ 5145], 95.00th=[ 5407], 00:09:43.199 | 99.00th=[ 5932], 99.50th=[ 6194], 99.90th=[ 9896], 99.95th=[18220], 00:09:43.199 | 99.99th=[26870] 00:09:43.199 bw ( KiB/s): min=49256, max=73712, per=100.00%, avg=66664.62, stdev=3229.83, samples=39 00:09:43.199 iops : min=12314, max=18428, avg=16666.15, stdev=807.46, samples=39 00:09:43.199 lat (usec) : 1000=0.01% 00:09:43.199 lat (msec) : 2=0.51%, 4=57.38%, 10=42.01%, 20=0.06%, 50=0.04% 00:09:43.199 cpu : usr=4.42%, sys=14.69%, ctx=44891, majf=0, minf=65 00:09:43.199 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:09:43.199 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:09:43.199 issued rwts: total=333116,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.199 latency : target=0, window=0, percentile=100.00%, depth=64 00:09:43.199 00:09:43.199 Run status group 0 (all jobs): 00:09:43.199 READ: bw=65.1MiB/s (68.2MB/s), 65.1MiB/s-65.1MiB/s (68.2MB/s-68.2MB/s), io=1301MiB (1364MB), run=20003-20003msec 00:09:43.199 05:00:43 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@116 -- # rm -rf /mnt/device/test 00:09:43.199 05:00:43 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@117 -- # umount /mnt/device 00:09:43.199 ************************************ 00:09:43.199 END TEST iscsi_tgt_filesystem_btrfs 00:09:43.199 ************************************ 00:09:43.199 00:09:43.199 real 0m38.246s 00:09:43.199 user 0m2.261s 00:09:43.199 sys 0m9.330s 00:09:43.199 05:00:43 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:43.199 05:00:43 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:43.199 05:00:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:09:43.199 05:00:43 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@146 -- # run_test iscsi_tgt_filesystem_xfs filesystem_test xfs 00:09:43.199 05:00:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:43.199 05:00:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:43.199 05:00:43 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:43.199 ************************************ 00:09:43.199 START TEST iscsi_tgt_filesystem_xfs 00:09:43.199 ************************************ 00:09:43.199 05:00:43 
iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1123 -- # filesystem_test xfs 00:09:43.199 05:00:43 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@89 -- # fstype=xfs 00:09:43.199 05:00:43 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@91 -- # make_filesystem xfs /dev/sda1 00:09:43.199 05:00:43 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:09:43.199 05:00:43 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/sda1 00:09:43.199 05:00:43 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:09:43.199 05:00:43 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:09:43.199 05:00:43 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:09:43.199 05:00:43 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:09:43.199 05:00:43 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/sda1 00:09:43.199 meta-data=/dev/sda1 isize=512 agcount=4, agsize=130560 blks 00:09:43.199 = sectsz=4096 attr=2, projid32bit=1 00:09:43.199 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:43.199 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:43.199 data = bsize=4096 blocks=522240, imaxpct=25 00:09:43.199 = sunit=0 swidth=0 blks 00:09:43.199 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:43.199 log =internal log bsize=4096 blocks=16384, version=2 00:09:43.199 = sectsz=4096 sunit=1 blks, lazy-count=1 00:09:43.199 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:43.768 Discarding blocks...Done. 
00:09:43.768 05:00:43 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:09:43.768 05:00:43 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@92 -- # mount /dev/sda1 /mnt/device 00:09:44.336 05:00:44 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@93 -- # '[' 1 -eq 1 ']' 00:09:44.336 05:00:44 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@94 -- # fio -filename=/mnt/device/test -direct=1 -iodepth 64 -thread=1 -invalidate=1 -rw=randwrite -ioengine=libaio -bs=4k -size=1024M -name=job0 00:09:44.336 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 00:09:44.336 fio-3.35 00:09:44.336 Starting 1 thread 00:09:44.336 job0: Laying out IO file (1 file / 1024MiB) 00:10:02.422 00:10:02.422 job0: (groupid=0, jobs=1): err= 0: pid=76849: Tue Jul 23 05:01:00 2024 00:10:02.422 write: IOPS=16.5k, BW=64.3MiB/s (67.5MB/s)(1024MiB/15919msec); 0 zone resets 00:10:02.422 slat (usec): min=2, max=2447, avg=20.20, stdev=117.85 00:10:02.422 clat (usec): min=823, max=9796, avg=3864.88, stdev=983.39 00:10:02.422 lat (usec): min=843, max=9816, avg=3885.08, stdev=988.79 00:10:02.422 clat percentiles (usec): 00:10:02.422 | 1.00th=[ 2008], 5.00th=[ 2245], 10.00th=[ 2540], 20.00th=[ 2868], 00:10:02.422 | 30.00th=[ 3359], 40.00th=[ 3654], 50.00th=[ 3916], 60.00th=[ 4178], 00:10:02.423 | 70.00th=[ 4359], 80.00th=[ 4621], 90.00th=[ 5080], 95.00th=[ 5604], 00:10:02.423 | 99.00th=[ 6128], 99.50th=[ 6325], 99.90th=[ 7046], 99.95th=[ 7504], 00:10:02.423 | 99.99th=[ 8586] 00:10:02.423 bw ( KiB/s): min=62456, max=70256, per=99.93%, avg=65821.97, stdev=1509.60, samples=31 00:10:02.423 iops : min=15614, max=17564, avg=16455.48, stdev=377.41, samples=31 00:10:02.423 lat (usec) : 1000=0.01% 00:10:02.423 lat (msec) : 2=0.95%, 4=52.67%, 10=46.38% 00:10:02.423 cpu : usr=4.98%, sys=11.11%, 
ctx=23060, majf=0, minf=1 00:10:02.423 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:10:02.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.423 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:10:02.423 issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.423 latency : target=0, window=0, percentile=100.00%, depth=64 00:10:02.423 00:10:02.423 Run status group 0 (all jobs): 00:10:02.423 WRITE: bw=64.3MiB/s (67.5MB/s), 64.3MiB/s-64.3MiB/s (67.5MB/s-67.5MB/s), io=1024MiB (1074MB), run=15919-15919msec 00:10:02.423 00:10:02.423 Disk stats (read/write): 00:10:02.423 sda: ios=0/260336, merge=0/1152, ticks=0/900455, in_queue=900454, util=99.47% 00:10:02.423 05:01:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@96 -- # umount /mnt/device 00:10:02.423 05:01:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@98 -- # iscsiadm -m node --logout 00:10:02.423 Logging out of session [sid: 3, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:10:02.423 Logout of [sid: 3, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:10:02.423 05:01:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@99 -- # waitforiscsidevices 0 00:10:02.423 05:01:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@116 -- # local num=0 00:10:02.423 05:01:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:10:02.423 05:01:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:10:02.423 05:01:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:10:02.423 05:01:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:10:02.423 iscsiadm: No active sessions. 00:10:02.423 05:01:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # true 00:10:02.423 05:01:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # n=0 00:10:02.423 05:01:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:10:02.423 05:01:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@123 -- # return 0 00:10:02.423 05:01:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@100 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:10:02.423 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:10:02.423 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:10:02.423 05:01:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@101 -- # waitforiscsidevices 1 00:10:02.423 05:01:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@116 -- # local num=1 00:10:02.423 05:01:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:10:02.423 05:01:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:10:02.423 05:01:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:10:02.423 05:01:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:10:02.423 [2024-07-23 05:01:00.691824] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:02.423 05:01:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # n=1 00:10:02.423 05:01:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:10:02.423 05:01:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@123 -- # return 0 00:10:02.423 05:01:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@103 -- # iscsiadm -m session -P 3 00:10:02.423 05:01:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@103 -- # grep 'Attached scsi disk' 00:10:02.423 05:01:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@103 -- # awk '{print $4}' 00:10:02.423 05:01:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@103 -- # dev=sda 00:10:02.423 05:01:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@105 -- # waitforfile /dev/sda1 00:10:02.423 05:01:00 
iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1265 -- # local i=0 00:10:02.423 05:01:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda1 ']' 00:10:02.423 05:01:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1272 -- # '[' '!' -e /dev/sda1 ']' 00:10:02.423 05:01:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1276 -- # return 0 00:10:02.423 05:01:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@106 -- # mount -o rw /dev/sda1 /mnt/device 00:10:02.423 05:01:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@107 -- # '[' -f /mnt/device/test ']' 00:10:02.423 File existed. 00:10:02.423 05:01:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@108 -- # echo 'File existed.' 00:10:02.423 05:01:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@109 -- # fio -filename=/mnt/device/test -direct=1 -iodepth 64 -thread=1 -invalidate=1 -rw=randread -ioengine=libaio -bs=4k -runtime=20 -time_based=1 -name=job0 00:10:02.423 job0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 00:10:02.423 fio-3.35 00:10:02.423 Starting 1 thread 00:10:24.383 00:10:24.383 job0: (groupid=0, jobs=1): err= 0: pid=77065: Tue Jul 23 05:01:21 2024 00:10:24.383 read: IOPS=16.4k, BW=64.2MiB/s (67.3MB/s)(1284MiB/20003msec) 00:10:24.383 slat (usec): min=2, max=272, avg= 7.57, stdev= 7.90 00:10:24.383 clat (usec): min=1203, max=12993, avg=3885.69, stdev=1028.47 00:10:24.383 lat (usec): min=1219, max=12999, avg=3893.26, stdev=1028.11 00:10:24.383 clat percentiles (usec): 00:10:24.383 | 1.00th=[ 2089], 5.00th=[ 2376], 10.00th=[ 2573], 20.00th=[ 2868], 00:10:24.383 | 30.00th=[ 3261], 40.00th=[ 3523], 50.00th=[ 3851], 60.00th=[ 4113], 
00:10:24.383 | 70.00th=[ 4424], 80.00th=[ 4817], 90.00th=[ 5276], 95.00th=[ 5669], 00:10:24.383 | 99.00th=[ 6259], 99.50th=[ 6521], 99.90th=[ 7439], 99.95th=[ 8029], 00:10:24.383 | 99.99th=[ 9503] 00:10:24.383 bw ( KiB/s): min=60192, max=75472, per=99.79%, avg=65596.92, stdev=3621.40, samples=39 00:10:24.383 iops : min=15048, max=18868, avg=16399.23, stdev=905.35, samples=39 00:10:24.383 lat (msec) : 2=0.54%, 4=55.12%, 10=44.33%, 20=0.01% 00:10:24.383 cpu : usr=5.27%, sys=12.34%, ctx=29404, majf=0, minf=65 00:10:24.383 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:10:24.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:10:24.383 issued rwts: total=328713,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.383 latency : target=0, window=0, percentile=100.00%, depth=64 00:10:24.383 00:10:24.383 Run status group 0 (all jobs): 00:10:24.383 READ: bw=64.2MiB/s (67.3MB/s), 64.2MiB/s-64.2MiB/s (67.3MB/s-67.3MB/s), io=1284MiB (1346MB), run=20003-20003msec 00:10:24.383 00:10:24.383 Disk stats (read/write): 00:10:24.383 sda: ios=325217/0, merge=1363/0, ticks=1222910/0, in_queue=1222910, util=99.59% 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@116 -- # rm -rf /mnt/device/test 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@117 -- # umount /mnt/device 00:10:24.383 00:10:24.383 real 0m37.808s 00:10:24.383 user 0m2.106s 00:10:24.383 sys 0m4.497s 00:10:24.383 ************************************ 00:10:24.383 END TEST iscsi_tgt_filesystem_xfs 00:10:24.383 ************************************ 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- 
common/autotest_common.sh@10 -- # set +x 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@148 -- # rm -rf /mnt/device 00:10:24.383 Cleaning up iSCSI connection 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@152 -- # iscsicleanup 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:10:24.383 Logging out of session [sid: 4, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:10:24.383 Logout of [sid: 4, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@983 -- # rm -rf 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@153 -- # remove_backends 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@17 -- # echo 'INFO: Removing lvol bdev' 00:10:24.383 INFO: Removing lvol bdev 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@18 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:24.383 [2024-07-23 05:01:21.209194] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (6eafa2e3-9102-4d56-b27e-917f6f0a7e28) received event(SPDK_BDEV_EVENT_REMOVE) 00:10:24.383 INFO: Removing lvol stores 00:10:24.383 
05:01:21 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@20 -- # echo 'INFO: Removing lvol stores' 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@21 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:24.383 INFO: Removing NVMe 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@23 -- # echo 'INFO: Removing NVMe' 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@24 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@26 -- # return 0 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@154 -- # killprocess 75660 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@948 -- # '[' -z 75660 ']' 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@952 -- # kill -0 75660 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@953 -- # uname 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75660 
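The killprocess helper traced above checks that the pid argument is non-empty, probes the process with `kill -0`, resolves its command name with `ps --no-headers -o comm=`, then kills and reaps it. A minimal standalone sketch of that pattern (hypothetical helper, not the actual autotest_common.sh code):

```shell
#!/usr/bin/env bash
# Hypothetical standalone sketch of the killprocess pattern from the log:
# validate the pid, confirm the process is alive, report what is being
# killed, then terminate it and reap it.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                 # mirrors '[' -z $pid ']'
    kill -0 "$pid" 2>/dev/null || return 1    # process must still exist
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    echo "killing process with pid $pid ($process_name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true           # reap; ignore SIGTERM status
}

sleep 30 &                 # stand-in for the iscsi_tgt reactor process
bgpid=$!
killprocess "$bgpid"
```

In the real harness the reactor process is named e.g. `reactor_0`, and an extra guard refuses to kill anything whose command name is `sudo`.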
00:10:24.383 killing process with pid 75660 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75660' 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@967 -- # kill 75660 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@972 -- # wait 75660 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@155 -- # iscsitestfini 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:10:24.383 ************************************ 00:10:24.383 END TEST iscsi_tgt_filesystem 00:10:24.383 ************************************ 00:10:24.383 00:10:24.383 real 1m55.165s 00:10:24.383 user 7m19.112s 00:10:24.383 sys 0m34.222s 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:24.383 05:01:21 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:24.383 05:01:21 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:10:24.383 05:01:21 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@32 -- # run_test chap_during_discovery /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_discovery.sh 00:10:24.383 05:01:21 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:24.383 05:01:21 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:24.383 05:01:21 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:10:24.383 ************************************ 00:10:24.383 START TEST chap_during_discovery 00:10:24.383 ************************************ 00:10:24.383 05:01:21 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_discovery.sh 00:10:24.383 * Looking for test storage... 00:10:24.383 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap 00:10:24.383 05:01:21 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:10:24.383 05:01:21 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:10:24.383 05:01:21 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:10:24.383 05:01:21 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:10:24.383 05:01:21 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:10:24.383 05:01:21 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:10:24.383 05:01:21 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:10:24.383 05:01:21 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:10:24.383 05:01:21 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:10:24.383 05:01:21 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:10:24.383 05:01:21 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:10:24.383 05:01:21 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:10:24.383 05:01:21 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:10:24.383 05:01:21 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:10:24.383 05:01:21 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:10:24.383 05:01:21 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 
00:10:24.383 05:01:21 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:10:24.383 05:01:21 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:10:24.383 05:01:21 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:10:24.383 05:01:21 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:10:24.383 05:01:21 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_common.sh 00:10:24.383 05:01:21 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@7 -- # TARGET_NAME=iqn.2016-06.io.spdk:disk1 00:10:24.383 05:01:21 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@8 -- # TARGET_ALIAS_NAME=disk1_alias 00:10:24.383 05:01:21 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@9 -- # MALLOC_BDEV_SIZE=64 00:10:24.383 05:01:21 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@10 -- # MALLOC_BLOCK_SIZE=512 00:10:24.384 05:01:21 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@13 -- # USER=chapo 00:10:24.384 05:01:21 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@14 -- # MUSER=mchapo 00:10:24.384 05:01:21 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@15 -- # PASS=123456789123 00:10:24.384 05:01:21 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@16 -- # MPASS=321978654321 00:10:24.384 05:01:21 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@19 -- # iscsitestinit 00:10:24.384 05:01:21 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:10:24.384 05:01:21 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@21 -- # set_up_iscsi_target 00:10:24.384 05:01:21 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@140 -- # timing_enter start_iscsi_tgt 00:10:24.384 05:01:21 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:10:24.384 05:01:21 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:24.384 05:01:21 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@142 -- # pid=77365 00:10:24.384 iSCSI target launched. pid: 77365 00:10:24.384 05:01:21 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@143 -- # echo 'iSCSI target launched. pid: 77365' 00:10:24.384 05:01:21 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@141 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc 00:10:24.384 05:01:21 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@144 -- # trap 'killprocess $pid;exit 1' SIGINT SIGTERM EXIT 00:10:24.384 05:01:21 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@145 -- # waitforlisten 77365 00:10:24.384 05:01:21 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@829 -- # '[' -z 77365 ']' 00:10:24.384 05:01:21 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.384 05:01:21 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:24.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.384 05:01:21 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.384 05:01:21 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:24.384 05:01:21 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:24.384 [2024-07-23 05:01:21.882480] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
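The harness launches `iscsi_tgt` with `--wait-for-rpc` and then blocks in `waitforlisten` until the RPC UNIX domain socket is up. A rough standalone sketch of that polling loop (assumed shape; the real helper in autotest_common.sh does more, such as verifying the pid is still alive on each retry):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of waitforlisten: poll for the RPC UNIX socket with a
# bounded retry budget instead of sleeping a fixed time.
waitforlisten() {
    local rpc_addr=${1:-/var/tmp/spdk.sock}
    local max_retries=${2:-100}
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    local i
    for ((i = 0; i < max_retries; i++)); do
        [ -S "$rpc_addr" ] && return 0        # socket exists: target is up
        sleep 0.1
    done
    return 1                                   # gave up; caller should fail
}

sock=$(mktemp -u)                              # path only, nothing bound yet
# Bind a throwaway UNIX socket to play the role of the target's RPC listener.
python3 -c "import socket; socket.socket(socket.AF_UNIX).bind('$sock')" &
waitforlisten "$sock" 50 && echo "listener detected"
```

The retry budget of 100 matches the `max_retries=100` visible in the traced helper; the socket path and second parameter here are illustrative.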
00:10:24.384 [2024-07-23 05:01:21.882603] [ DPDK EAL parameters: iscsi --no-shconf -c 0x2 -m 512 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77365 ] 00:10:24.384 [2024-07-23 05:01:22.170770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.384 [2024-07-23 05:01:22.230620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:24.384 05:01:22 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:24.384 05:01:22 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@862 -- # return 0 00:10:24.384 05:01:22 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@146 -- # rpc_cmd iscsi_set_options -o 30 -a 4 00:10:24.384 05:01:22 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.384 05:01:22 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:24.384 05:01:22 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.384 05:01:22 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@147 -- # rpc_cmd framework_start_init 00:10:24.384 05:01:22 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.384 05:01:22 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:24.384 05:01:23 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.384 iscsi_tgt is listening. Running tests... 00:10:24.384 05:01:23 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@148 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:10:24.384 05:01:23 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@149 -- # timing_exit start_iscsi_tgt 00:10:24.384 05:01:23 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:24.384 05:01:23 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:24.384 05:01:23 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@151 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:10:24.384 05:01:23 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.384 05:01:23 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:24.384 05:01:23 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.384 05:01:23 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@152 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:10:24.384 05:01:23 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.384 05:01:23 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:24.384 05:01:23 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.384 05:01:23 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@153 -- # rpc_cmd bdev_malloc_create 64 512 00:10:24.384 05:01:23 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.384 05:01:23 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:24.384 Malloc0 00:10:24.384 05:01:23 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.384 05:01:23 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@154 -- # rpc_cmd iscsi_create_target_node iqn.2016-06.io.spdk:disk1 disk1_alias Malloc0:0 1:2 256 -d 00:10:24.384 05:01:23 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.384 05:01:23 
iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:24.384 05:01:23 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.384 05:01:23 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@155 -- # sleep 1 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@156 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:10:24.384 configuring target for bidirectional authentication 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@24 -- # echo 'configuring target for bidirectional authentication' 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@25 -- # config_chap_credentials_for_target -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@84 -- # parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@13 -- # OPTIND=0 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@16 -- # BI_DIRECT=0 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@20 -- # CHAP_MPASS= 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@21 -- # AUTH_GROUP_ID=1 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt

00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=1 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 
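The option walk traced above comes from parse_cmd_line's `getopts` loop. A self-contained re-creation (a sketch assuming the real chap_common.sh helper matches this shape; variable names are taken from the trace):

```shell
#!/usr/bin/env bash
# Standalone sketch of the parse_cmd_line getopts loop traced in the log:
# -t auth group, -u/-s CHAP user+secret, -r/-m mutual user+secret,
# -d during discovery, -l during login, -b bidirectional.
parse_cmd_line() {
    local OPTIND=1 opt
    DURING_DISCOVERY=0 DURING_LOGIN=0 BI_DIRECT=0
    CHAP_USER="" CHAP_PASS="" CHAP_MUSER="" CHAP_MPASS="" AUTH_GROUP_ID=1
    while getopts ":t:u:s:r:m:dlb" opt; do
        case ${opt} in
            t) AUTH_GROUP_ID=$OPTARG ;;
            u) CHAP_USER=$OPTARG ;;
            s) CHAP_PASS=$OPTARG ;;
            r) CHAP_MUSER=$OPTARG ;;
            m) CHAP_MPASS=$OPTARG ;;
            d) DURING_DISCOVERY=1 ;;
            l) DURING_LOGIN=1 ;;
            b) BI_DIRECT=1 ;;
        esac
    done
}

# Same invocation as config_chap_credentials_for_target in the log:
parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b
echo "$AUTH_GROUP_ID $CHAP_USER $CHAP_MUSER $DURING_DISCOVERY $BI_DIRECT"
# -> 1 chapo mchapo 1 1
```

The leading `:` in the optstring puts `getopts` in silent error-reporting mode, so unknown flags land in the `?` case instead of printing to stderr.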
00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@47 -- # BI_DIRECT=1 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@86 -- # rpc_cmd iscsi_create_auth_group 1 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@88 -- # '[' -z mchapo ']' 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@88 -- # '[' -z 321978654321 ']' 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@91 -- # rpc_cmd iscsi_auth_group_add_secret -u chapo -s 123456789123 -m mchapo -r 321978654321 1 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@95 -- # '[' 0 -eq 1 ']' 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@102 -- # '[' 1 -eq 1 ']' 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@103 -- # '[' 1 -eq 1 ']' 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@104 -- # rpc_cmd iscsi_set_discovery_auth -r -m -g 1 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.384 05:01:24 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:24.384 05:01:24 
iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.385 executing discovery without adding credential to initiator - we expect failure 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@26 -- # echo 'executing discovery without adding credential to initiator - we expect failure' 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@27 -- # rc=0 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@28 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:10:24.385 iscsiadm: Login failed to authenticate with target 00:10:24.385 iscsiadm: discovery login to 10.0.0.1 rejected: initiator failed authorization 00:10:24.385 iscsiadm: Could not perform SendTargets discovery: iSCSI login failed due to authorization failure 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@28 -- # rc=24 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@29 -- # '[' 24 -eq 0 ']' 00:10:24.385 configuring initiator for bidirectional authentication 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@35 -- # echo 'configuring initiator for bidirectional authentication' 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@36 -- # config_chap_credentials_for_initiator -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@113 -- # parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@13 -- # OPTIND=0 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@16
-- # BI_DIRECT=0 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@20 -- # CHAP_MPASS= 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@21 -- # AUTH_GROUP_ID=1 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=1 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in
00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@47 -- # BI_DIRECT=1 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@114 -- # default_initiator_chap_credentials 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@64 -- # iscsiadm -m node --logout 00:10:24.385 iscsiadm: No matching sessions found 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@64 -- # true 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@65 -- # iscsiadm -m node -o delete 00:10:24.385 iscsiadm: No records found 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@65 -- # true 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@67 -- # sed -i 's/^node.session.auth.authmethod = CHAP/#node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@68 -- # sed -i 's/^node.session.auth.username =.*/#node.session.auth.username = username/' /etc/iscsi/iscsid.conf 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@69 -- # sed -i 's/^node.session.auth.password =.*/#node.session.auth.password 
= password/' /etc/iscsi/iscsid.conf 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@70 -- # sed -i 's/^node.session.auth.username_in =.*/#node.session.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@71 -- # sed -i 's/^node.session.auth.password_in =.*/#node.session.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@73 -- # sed -i 's/^discovery.sendtargets.auth.authmethod = CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@74 -- # sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' /etc/iscsi/iscsid.conf 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@75 -- # sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' /etc/iscsi/iscsid.conf 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@76 -- # sed -i 's/^discovery.sendtargets.auth.username_in =.*/#discovery.sendtargets.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@77 -- # sed -i 's/^discovery.sendtargets.auth.password_in =.*/#discovery.sendtargets.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@78 -- # restart_iscsid 00:10:24.385 05:01:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@58 -- # sleep 3 00:10:27.672 05:01:27 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:10:27.672 05:01:27 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@60 -- # sleep 1 00:10:28.239 05:01:28 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@79 -- # trap 'trap - 
ERR; print_backtrace >&2' ERR 00:10:28.239 05:01:28 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@116 -- # '[' 0 -eq 1 ']' 00:10:28.239 05:01:28 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@126 -- # '[' 1 -eq 1 ']' 00:10:28.239 05:01:28 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@127 -- # sed -i 's/#discovery.sendtargets.auth.authmethod = CHAP/discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:10:28.239 05:01:28 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@128 -- # sed -i 's/#discovery.sendtargets.auth.username =.*/discovery.sendtargets.auth.username = chapo/' /etc/iscsi/iscsid.conf 00:10:28.239 05:01:28 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@129 -- # sed -i 's/#discovery.sendtargets.auth.password =.*/discovery.sendtargets.auth.password = 123456789123/' /etc/iscsi/iscsid.conf 00:10:28.239 05:01:28 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@130 -- # '[' 1 -eq 1 ']' 00:10:28.239 05:01:28 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@130 -- # '[' -n 321978654321 ']' 00:10:28.239 05:01:28 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@130 -- # '[' -n mchapo ']' 00:10:28.240 05:01:28 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@131 -- # sed -i 's/#discovery.sendtargets.auth.username_in =.*/discovery.sendtargets.auth.username_in = mchapo/' /etc/iscsi/iscsid.conf 00:10:28.240 05:01:28 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@132 -- # sed -i 's/#discovery.sendtargets.auth.password_in =.*/discovery.sendtargets.auth.password_in = 321978654321/' /etc/iscsi/iscsid.conf 00:10:28.240 05:01:28 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@135 -- # restart_iscsid 00:10:28.240 05:01:28 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@58 -- # sleep 3 00:10:31.577 05:01:31 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:10:31.577 05:01:31 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@60 -- 
# sleep 1 00:10:32.144 05:01:32 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@136 -- # trap 'trap - ERR; default_initiator_chap_credentials; print_backtrace >&2' ERR 00:10:32.144 executing discovery with adding credential to initiator 00:10:32.144 05:01:32 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@37 -- # echo 'executing discovery with adding credential to initiator' 00:10:32.144 05:01:32 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@38 -- # rc=0 00:10:32.144 05:01:32 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@39 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:10:32.403 10.0.0.1:3260,1 iqn.2016-06.io.spdk:disk1 00:10:32.403 05:01:32 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@40 -- # '[' 0 -ne 0 ']' 00:10:32.403 DONE 00:10:32.403 05:01:32 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@44 -- # echo DONE 00:10:32.403 05:01:32 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@45 -- # default_initiator_chap_credentials 00:10:32.403 05:01:32 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@64 -- # iscsiadm -m node --logout 00:10:32.403 iscsiadm: No matching sessions found 00:10:32.403 05:01:32 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@64 -- # true 00:10:32.403 05:01:32 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@65 -- # iscsiadm -m node -o delete 00:10:32.403 05:01:32 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@67 -- # sed -i 's/^node.session.auth.authmethod = CHAP/#node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:10:32.403 05:01:32 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@68 -- # sed -i 's/^node.session.auth.username =.*/#node.session.auth.username = username/' /etc/iscsi/iscsid.conf 00:10:32.403 05:01:32 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@69 -- # sed -i 's/^node.session.auth.password =.*/#node.session.auth.password = password/' /etc/iscsi/iscsid.conf 00:10:32.403 05:01:32 
iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@70 -- # sed -i 's/^node.session.auth.username_in =.*/#node.session.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:10:32.403 05:01:32 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@71 -- # sed -i 's/^node.session.auth.password_in =.*/#node.session.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:10:32.403 05:01:32 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@73 -- # sed -i 's/^discovery.sendtargets.auth.authmethod = CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:10:32.403 05:01:32 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@74 -- # sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' /etc/iscsi/iscsid.conf 00:10:32.403 05:01:32 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@75 -- # sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' /etc/iscsi/iscsid.conf 00:10:32.403 05:01:32 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@76 -- # sed -i 's/^discovery.sendtargets.auth.username_in =.*/#discovery.sendtargets.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:10:32.403 05:01:32 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@77 -- # sed -i 's/^discovery.sendtargets.auth.password_in =.*/#discovery.sendtargets.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:10:32.403 05:01:32 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@78 -- # restart_iscsid 00:10:32.403 05:01:32 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@58 -- # sleep 3 00:10:35.689 05:01:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:10:35.689 05:01:35 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@60 -- # sleep 1 00:10:36.638 05:01:36 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@79 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:36.638 05:01:36 
iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@47 -- # trap - SIGINT SIGTERM EXIT 00:10:36.638 05:01:36 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@49 -- # killprocess 77365 00:10:36.638 05:01:36 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@948 -- # '[' -z 77365 ']' 00:10:36.638 05:01:36 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@952 -- # kill -0 77365 00:10:36.638 05:01:36 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@953 -- # uname 00:10:36.638 05:01:36 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:36.638 05:01:36 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77365 00:10:36.638 05:01:36 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:36.638 killing process with pid 77365 00:10:36.638 05:01:36 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:36.638 05:01:36 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77365' 00:10:36.638 05:01:36 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@967 -- # kill 77365 00:10:36.638 05:01:36 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@972 -- # wait 77365 00:10:36.910 05:01:36 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@51 -- # iscsitestfini 00:10:36.910 05:01:36 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:10:36.910 00:10:36.910 real 0m15.231s 00:10:36.910 user 0m15.264s 00:10:36.910 sys 0m0.683s 00:10:36.910 05:01:36 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:36.910 05:01:36 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:36.910 ************************************ 00:10:36.910 END TEST chap_during_discovery 00:10:36.910 
************************************ 00:10:36.910 05:01:36 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:10:36.910 05:01:36 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@33 -- # run_test chap_mutual_auth /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_mutual_not_set.sh 00:10:36.910 05:01:36 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:36.910 05:01:36 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:36.910 05:01:36 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:10:36.910 ************************************ 00:10:36.910 START TEST chap_mutual_auth 00:10:36.910 ************************************ 00:10:36.910 05:01:36 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_mutual_not_set.sh 00:10:36.910 * Looking for test storage... 00:10:36.910 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 
00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_common.sh 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@7 -- # TARGET_NAME=iqn.2016-06.io.spdk:disk1 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@8 -- # TARGET_ALIAS_NAME=disk1_alias 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@9 -- # MALLOC_BDEV_SIZE=64 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@10 -- # MALLOC_BLOCK_SIZE=512 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@13 -- # USER=chapo 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@14 -- # MUSER=mchapo 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@15 -- # 
PASS=123456789123 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@16 -- # MPASS=321978654321 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@19 -- # iscsitestinit 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@21 -- # set_up_iscsi_target 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@140 -- # timing_enter start_iscsi_tgt 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@142 -- # pid=77638 00:10:36.910 iSCSI target launched. pid: 77638 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@143 -- # echo 'iSCSI target launched. pid: 77638' 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@141 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@144 -- # trap 'killprocess $pid;exit 1' SIGINT SIGTERM EXIT 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@145 -- # waitforlisten 77638 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@829 -- # '[' -z 77638 ']' 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:36.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:36.910 05:01:37 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:10:37.169 [2024-07-23 05:01:37.172048] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:10:37.169 [2024-07-23 05:01:37.172153] [ DPDK EAL parameters: iscsi --no-shconf -c 0x2 -m 512 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77638 ] 00:10:37.428 [2024-07-23 05:01:37.450972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.428 [2024-07-23 05:01:37.515422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.996 05:01:38 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:37.996 05:01:38 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@862 -- # return 0 00:10:37.996 05:01:38 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@146 -- # rpc_cmd iscsi_set_options -o 30 -a 4 00:10:37.996 05:01:38 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.996 05:01:38 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:10:37.996 05:01:38 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.996 05:01:38 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@147 -- # rpc_cmd framework_start_init 00:10:37.996 05:01:38 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.996 05:01:38 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:10:38.255 05:01:38 iscsi_tgt.chap_mutual_auth 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.255 iscsi_tgt is listening. Running tests... 00:10:38.255 05:01:38 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@148 -- # echo 'iscsi_tgt is listening. Running tests...' 00:10:38.255 05:01:38 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@149 -- # timing_exit start_iscsi_tgt 00:10:38.255 05:01:38 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:38.255 05:01:38 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:10:38.255 05:01:38 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@151 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:10:38.255 05:01:38 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.255 05:01:38 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:10:38.255 05:01:38 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.255 05:01:38 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@152 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:10:38.255 05:01:38 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.255 05:01:38 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:10:38.255 05:01:38 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.256 05:01:38 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@153 -- # rpc_cmd bdev_malloc_create 64 512 00:10:38.256 05:01:38 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.256 05:01:38 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:10:38.256 Malloc0 00:10:38.256 05:01:38 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.256 05:01:38 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@154 -- # rpc_cmd iscsi_create_target_node iqn.2016-06.io.spdk:disk1 disk1_alias Malloc0:0 1:2 256 -d 
00:10:38.256 05:01:38 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.256 05:01:38 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:10:38.256 05:01:38 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.256 05:01:38 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@155 -- # sleep 1 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@156 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:10:39.193 configuring target for authentication 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@24 -- # echo 'configuring target for authentication' 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@25 -- # config_chap_credentials_for_target -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@84 -- # parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@13 -- # OPTIND=0 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@16 -- # BI_DIRECT=0 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@20 -- # CHAP_MUSER= 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@21 -- # AUTH_GROUP_ID=1 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts 
:t:u:s:r:m:dlb opt 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=1 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@44 -- 
# DURING_LOGIN=1 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@86 -- # rpc_cmd iscsi_create_auth_group 1 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@88 -- # '[' -z mchapo ']' 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@88 -- # '[' -z 321978654321 ']' 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@91 -- # rpc_cmd iscsi_auth_group_add_secret -u chapo -s 123456789123 -m mchapo -r 321978654321 1 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@95 -- # '[' 1 -eq 1 ']' 00:10:39.193 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@96 -- # '[' 0 -eq 1 ']' 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@99 -- # rpc_cmd iscsi_target_node_set_auth -g 1 -r iqn.2016-06.io.spdk:disk1 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@102 -- # '[' 1 -eq 1 ']' 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- 
chap/chap_common.sh@103 -- # '[' 0 -eq 1 ']' 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@106 -- # rpc_cmd iscsi_set_discovery_auth -r -g 1 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.453 executing discovery without adding credential to initiator - we expect failure 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@26 -- # echo 'executing discovery without adding credential to initiator - we expect failure' 00:10:39.453 configuring initiator with biderectional authentication 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@28 -- # echo 'configuring initiator with biderectional authentication' 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@29 -- # config_chap_credentials_for_initiator -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l -b 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@113 -- # parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l -b 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@13 -- # OPTIND=0 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@16 -- # BI_DIRECT=0 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:10:39.453 05:01:39 
iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@20 -- # CHAP_MUSER= 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@21 -- # AUTH_GROUP_ID=1 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=1 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:10:39.453 05:01:39 
iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@44 -- # DURING_LOGIN=1 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@47 -- # BI_DIRECT=1 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@114 -- # default_initiator_chap_credentials 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@64 -- # iscsiadm -m node --logout 00:10:39.453 iscsiadm: No matching sessions found 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@64 -- # true 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@65 -- # iscsiadm -m node -o delete 00:10:39.453 iscsiadm: No records found 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@65 -- # true 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@67 -- # sed -i 's/^node.session.auth.authmethod = CHAP/#node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@68 -- # sed -i 's/^node.session.auth.username =.*/#node.session.auth.username = username/' /etc/iscsi/iscsid.conf 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@69 -- # sed -i 's/^node.session.auth.password =.*/#node.session.auth.password = password/' /etc/iscsi/iscsid.conf 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@70 -- # sed -i 's/^node.session.auth.username_in =.*/#node.session.auth.username_in = username_in/' 
/etc/iscsi/iscsid.conf 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@71 -- # sed -i 's/^node.session.auth.password_in =.*/#node.session.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@73 -- # sed -i 's/^discovery.sendtargets.auth.authmethod = CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@74 -- # sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' /etc/iscsi/iscsid.conf 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@75 -- # sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' /etc/iscsi/iscsid.conf 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@76 -- # sed -i 's/^discovery.sendtargets.auth.username_in =.*/#discovery.sendtargets.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@77 -- # sed -i 's/^discovery.sendtargets.auth.password_in =.*/#discovery.sendtargets.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@78 -- # restart_iscsid 00:10:39.453 05:01:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@58 -- # sleep 3 00:10:42.740 05:01:42 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:10:42.740 05:01:42 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@60 -- # sleep 1 00:10:43.674 05:01:43 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@79 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:43.674 05:01:43 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@116 -- # '[' 1 -eq 1 ']' 00:10:43.674 05:01:43 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@117 -- # sed -i 's/#node.session.auth.authmethod = 
CHAP/node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:10:43.674 05:01:43 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@118 -- # sed -i 's/#node.session.auth.username =.*/node.session.auth.username = chapo/' /etc/iscsi/iscsid.conf 00:10:43.674 05:01:43 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@119 -- # sed -i 's/#node.session.auth.password =.*/node.session.auth.password = 123456789123/' /etc/iscsi/iscsid.conf 00:10:43.674 05:01:43 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@120 -- # '[' 1 -eq 1 ']' 00:10:43.674 05:01:43 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@120 -- # '[' -n 321978654321 ']' 00:10:43.674 05:01:43 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@120 -- # '[' -n mchapo ']' 00:10:43.674 05:01:43 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@121 -- # sed -i 's/#node.session.auth.username_in =.*/node.session.auth.username_in = mchapo/' /etc/iscsi/iscsid.conf 00:10:43.674 05:01:43 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@122 -- # sed -i 's/#node.session.auth.password_in =.*/node.session.auth.password_in = 321978654321/' /etc/iscsi/iscsid.conf 00:10:43.674 05:01:43 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@126 -- # '[' 1 -eq 1 ']' 00:10:43.674 05:01:43 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@127 -- # sed -i 's/#discovery.sendtargets.auth.authmethod = CHAP/discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:10:43.674 05:01:43 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@128 -- # sed -i 's/#discovery.sendtargets.auth.username =.*/discovery.sendtargets.auth.username = chapo/' /etc/iscsi/iscsid.conf 00:10:43.674 05:01:43 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@129 -- # sed -i 's/#discovery.sendtargets.auth.password =.*/discovery.sendtargets.auth.password = 123456789123/' /etc/iscsi/iscsid.conf 00:10:43.674 05:01:43 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@130 -- # '[' 1 -eq 1 ']' 00:10:43.674 05:01:43 iscsi_tgt.chap_mutual_auth -- 
chap/chap_common.sh@130 -- # '[' -n 321978654321 ']' 00:10:43.674 05:01:43 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@130 -- # '[' -n mchapo ']' 00:10:43.674 05:01:43 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@131 -- # sed -i 's/#discovery.sendtargets.auth.username_in =.*/discovery.sendtargets.auth.username_in = mchapo/' /etc/iscsi/iscsid.conf 00:10:43.674 05:01:43 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@132 -- # sed -i 's/#discovery.sendtargets.auth.password_in =.*/discovery.sendtargets.auth.password_in = 321978654321/' /etc/iscsi/iscsid.conf 00:10:43.674 05:01:43 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@135 -- # restart_iscsid 00:10:43.674 05:01:43 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@58 -- # sleep 3 00:10:46.989 05:01:46 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:10:46.989 05:01:46 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@60 -- # sleep 1 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@136 -- # trap 'trap - ERR; default_initiator_chap_credentials; print_backtrace >&2' ERR 00:10:47.559 executing discovery - target should not be discovered since the -m option was not used 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@30 -- # echo 'executing discovery - target should not be discovered since the -m option was not used' 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@31 -- # rc=0 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@32 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:10:47.559 [2024-07-23 05:01:47.712104] iscsi.c: 982:iscsi_auth_params: *ERROR*: Initiator wants to use mutual CHAP for security, but it's not enabled. 
00:10:47.559 [2024-07-23 05:01:47.712162] iscsi.c:1957:iscsi_op_login_rsp_handle_csg_bit: *ERROR*: iscsi_auth_params() failed 00:10:47.559 iscsiadm: Login failed to authenticate with target 00:10:47.559 iscsiadm: discovery login to 10.0.0.1 rejected: initiator failed authorization 00:10:47.559 iscsiadm: Could not perform SendTargets discovery: iSCSI login failed due to authorization failure 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@32 -- # rc=24 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@33 -- # '[' 24 -eq 0 ']' 00:10:47.559 configuring target for authentication with the -m option 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@37 -- # echo 'configuring target for authentication with the -m option' 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@38 -- # config_chap_credentials_for_target -t 2 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l -b 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@84 -- # parse_cmd_line -t 2 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l -b 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@13 -- # OPTIND=0 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@16 -- # BI_DIRECT=0 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@20 -- # CHAP_MUSER= 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@21 -- 
# AUTH_GROUP_ID=1 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=2 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- 
chap/chap_common.sh@24 -- # case ${opt} in 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@44 -- # DURING_LOGIN=1 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@47 -- # BI_DIRECT=1 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@86 -- # rpc_cmd iscsi_create_auth_group 2 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@88 -- # '[' -z mchapo ']' 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@88 -- # '[' -z 321978654321 ']' 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@91 -- # rpc_cmd iscsi_auth_group_add_secret -u chapo -s 123456789123 -m mchapo -r 321978654321 2 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@95 -- # '[' 1 -eq 1 ']' 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@96 -- # '[' 1 -eq 1 ']' 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@97 -- # rpc_cmd iscsi_target_node_set_auth -g 2 -r -m iqn.2016-06.io.spdk:disk1 00:10:47.559 05:01:47 
iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@102 -- # '[' 1 -eq 1 ']' 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@103 -- # '[' 1 -eq 1 ']' 00:10:47.559 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@104 -- # rpc_cmd iscsi_set_discovery_auth -r -m -g 2 00:10:47.560 05:01:47 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.560 05:01:47 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:10:47.560 05:01:47 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.560 executing discovery: 00:10:47.560 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@39 -- # echo 'executing discovery:' 00:10:47.560 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@40 -- # rc=0 00:10:47.560 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@41 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:10:47.560 10.0.0.1:3260,1 iqn.2016-06.io.spdk:disk1 00:10:47.560 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@42 -- # '[' 0 -ne 0 ']' 00:10:47.560 executing login: 00:10:47.560 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@46 -- # echo 'executing login:' 00:10:47.560 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@47 -- # rc=0 00:10:47.560 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@48 -- # iscsiadm -m node -l -p 10.0.0.1:3260 00:10:47.819 Logging in to [iface: default, target: iqn.2016-06.io.spdk:disk1, portal: 10.0.0.1,3260] 00:10:47.819 Login to [iface: default, target: iqn.2016-06.io.spdk:disk1, portal: 10.0.0.1,3260] 
successful. 00:10:47.819 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@49 -- # '[' 0 -ne 0 ']' 00:10:47.819 DONE 00:10:47.819 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@54 -- # echo DONE 00:10:47.819 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@55 -- # default_initiator_chap_credentials 00:10:47.819 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@64 -- # iscsiadm -m node --logout 00:10:47.819 [2024-07-23 05:01:47.819038] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:47.819 Logging out of session [sid: 5, target: iqn.2016-06.io.spdk:disk1, portal: 10.0.0.1,3260] 00:10:47.819 Logout of [sid: 5, target: iqn.2016-06.io.spdk:disk1, portal: 10.0.0.1,3260] successful. 00:10:47.819 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@65 -- # iscsiadm -m node -o delete 00:10:47.819 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@67 -- # sed -i 's/^node.session.auth.authmethod = CHAP/#node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:10:47.819 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@68 -- # sed -i 's/^node.session.auth.username =.*/#node.session.auth.username = username/' /etc/iscsi/iscsid.conf 00:10:47.819 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@69 -- # sed -i 's/^node.session.auth.password =.*/#node.session.auth.password = password/' /etc/iscsi/iscsid.conf 00:10:47.819 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@70 -- # sed -i 's/^node.session.auth.username_in =.*/#node.session.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:10:47.819 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@71 -- # sed -i 's/^node.session.auth.password_in =.*/#node.session.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:10:47.819 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@73 -- # sed -i 's/^discovery.sendtargets.auth.authmethod = 
CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:10:47.819 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@74 -- # sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' /etc/iscsi/iscsid.conf 00:10:47.819 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@75 -- # sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' /etc/iscsi/iscsid.conf 00:10:47.819 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@76 -- # sed -i 's/^discovery.sendtargets.auth.username_in =.*/#discovery.sendtargets.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:10:47.819 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@77 -- # sed -i 's/^discovery.sendtargets.auth.password_in =.*/#discovery.sendtargets.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:10:47.819 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@78 -- # restart_iscsid 00:10:47.819 05:01:47 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@58 -- # sleep 3 00:10:51.105 05:01:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:10:51.105 05:01:51 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@60 -- # sleep 1 00:10:52.042 05:01:52 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@79 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:52.042 05:01:52 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@57 -- # trap - SIGINT SIGTERM EXIT 00:10:52.042 05:01:52 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@59 -- # killprocess 77638 00:10:52.042 05:01:52 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@948 -- # '[' -z 77638 ']' 00:10:52.042 05:01:52 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@952 -- # kill -0 77638 00:10:52.042 05:01:52 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@953 -- # uname 00:10:52.042 05:01:52 iscsi_tgt.chap_mutual_auth -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:52.042 05:01:52 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77638 00:10:52.042 05:01:52 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:52.042 05:01:52 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:52.042 killing process with pid 77638 00:10:52.042 05:01:52 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77638' 00:10:52.042 05:01:52 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@967 -- # kill 77638 00:10:52.042 05:01:52 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@972 -- # wait 77638 00:10:52.300 05:01:52 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@61 -- # iscsitestfini 00:10:52.301 05:01:52 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:10:52.301 00:10:52.301 real 0m15.501s 00:10:52.301 user 0m15.544s 00:10:52.301 sys 0m0.709s 00:10:52.301 05:01:52 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:52.301 05:01:52 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:10:52.301 ************************************ 00:10:52.301 END TEST chap_mutual_auth 00:10:52.301 ************************************ 00:10:52.560 05:01:52 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:10:52.560 05:01:52 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@34 -- # run_test iscsi_tgt_reset /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/reset/reset.sh 00:10:52.560 05:01:52 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:52.560 05:01:52 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:52.560 05:01:52 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:10:52.560 ************************************ 00:10:52.560 START TEST iscsi_tgt_reset 00:10:52.560 
************************************ 00:10:52.560 05:01:52 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/reset/reset.sh 00:10:52.560 * Looking for test storage... 00:10:52.560 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/reset 00:10:52.560 05:01:52 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:10:52.560 05:01:52 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:10:52.560 05:01:52 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:10:52.560 05:01:52 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:10:52.560 05:01:52 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:10:52.560 05:01:52 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:10:52.560 05:01:52 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:10:52.560 05:01:52 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:10:52.560 05:01:52 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:10:52.560 05:01:52 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:10:52.560 05:01:52 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:10:52.560 05:01:52 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:10:52.560 05:01:52 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:10:52.560 05:01:52 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:10:52.560 05:01:52 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:10:52.560 05:01:52 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@25 -- # 
INITIATOR_TAG=2 00:10:52.560 05:01:52 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:10:52.560 05:01:52 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:10:52.560 05:01:52 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:10:52.560 05:01:52 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:10:52.560 05:01:52 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@11 -- # iscsitestinit 00:10:52.560 05:01:52 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:10:52.560 05:01:52 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:52.560 05:01:52 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:52.560 05:01:52 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@16 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:10:52.560 05:01:52 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@18 -- # hash sg_reset 00:10:52.560 05:01:52 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@22 -- # timing_enter start_iscsi_tgt 00:10:52.560 05:01:52 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:52.560 05:01:52 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:10:52.560 05:01:52 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@25 -- # pid=77933 00:10:52.560 Process pid: 77933 00:10:52.560 05:01:52 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@26 -- # echo 'Process pid: 77933' 00:10:52.560 05:01:52 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@24 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:10:52.560 05:01:52 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@28 -- # trap 'killprocess $pid; exit 1' SIGINT SIGTERM EXIT 00:10:52.560 05:01:52 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@30 -- # waitforlisten 77933 00:10:52.560 05:01:52 iscsi_tgt.iscsi_tgt_reset -- 
common/autotest_common.sh@829 -- # '[' -z 77933 ']' 00:10:52.560 05:01:52 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.560 05:01:52 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:52.560 05:01:52 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.560 05:01:52 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:52.560 05:01:52 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:10:52.560 [2024-07-23 05:01:52.725397] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:10:52.560 [2024-07-23 05:01:52.725506] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77933 ] 00:10:52.819 [2024-07-23 05:01:52.863925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.820 [2024-07-23 05:01:52.934180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.756 05:01:53 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:53.756 05:01:53 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@862 -- # return 0 00:10:53.756 05:01:53 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@31 -- # rpc_cmd iscsi_set_options -o 30 -a 16 00:10:53.756 05:01:53 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.756 05:01:53 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:10:53.756 05:01:53 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
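The reset test above launches `iscsi_tgt --wait-for-rpc` inside the `spdk_iscsi_ns` namespace and then blocks in `waitforlisten` until the RPC socket comes up. A minimal, self-contained sketch of that bounded polling pattern — the socket path is a stand-in for `/var/tmp/spdk.sock`, and a background `touch` stands in for the target creating its listener; the real helper does more than this:

```shell
sock=$(mktemp -u)             # stand-in path for /var/tmp/spdk.sock
( sleep 1; touch "$sock" ) &  # stand-in for the target creating its RPC socket
max_retries=100
i=0
until [ -e "$sock" ]; do
  i=$((i + 1))
  if [ "$i" -gt "$max_retries" ]; then
    echo "timed out waiting for $sock"
    break
  fi
  sleep 0.1                   # poll interval; total budget is max_retries * 0.1s
done
echo "socket present after $i retries"
rm -f "$sock"
wait
```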
00:10:53.756 05:01:53 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@32 -- # rpc_cmd framework_start_init 00:10:53.756 05:01:53 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.756 05:01:53 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:10:53.756 05:01:53 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.756 iscsi_tgt is listening. Running tests... 00:10:53.756 05:01:53 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@33 -- # echo 'iscsi_tgt is listening. Running tests...' 00:10:53.756 05:01:53 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@35 -- # timing_exit start_iscsi_tgt 00:10:53.756 05:01:53 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:53.756 05:01:53 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:10:53.756 05:01:53 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@37 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:10:53.756 05:01:53 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.756 05:01:53 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:10:53.756 05:01:53 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.756 05:01:53 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@38 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:10:53.756 05:01:53 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.756 05:01:53 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:10:53.756 05:01:53 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.756 05:01:53 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@39 -- # rpc_cmd bdev_malloc_create 64 512 00:10:53.756 05:01:53 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.756 05:01:53 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # 
set +x 00:10:53.756 Malloc0 00:10:53.756 05:01:53 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.756 05:01:53 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@44 -- # rpc_cmd iscsi_create_target_node Target3 Target3_alias Malloc0:0 1:2 64 -d 00:10:53.756 05:01:53 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.756 05:01:53 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:10:53.756 05:01:53 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.756 05:01:53 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@45 -- # sleep 1 00:10:55.132 05:01:54 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@47 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:10:55.132 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:10:55.132 05:01:54 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@48 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:10:55.132 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:10:55.132 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
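In the CHAP teardown earlier (`chap_common.sh@67`-`@77`), the harness disables credentials by commenting the relevant keys back out of `/etc/iscsi/iscsid.conf` with `sed` rather than deleting them. A runnable sketch of that pattern against a temporary copy — the three sample config lines here are invented for illustration; the real file carries many more keys:

```shell
conf=$(mktemp)
cat > "$conf" <<'EOF'
node.session.auth.authmethod = CHAP
node.session.auth.username = chapo
node.session.auth.password = 123456789123
EOF
# Comment the credentials out in place, mirroring chap_common.sh:
sed -i 's/^node.session.auth.authmethod = CHAP/#node.session.auth.authmethod = CHAP/' "$conf"
sed -i 's/^node.session.auth.username =.*/#node.session.auth.username = username/' "$conf"
sed -i 's/^node.session.auth.password =.*/#node.session.auth.password = password/' "$conf"
commented=$(grep -c '^#node.session.auth' "$conf")
echo "commented out $commented lines"
rm -f "$conf"
```

Commenting rather than deleting lets a later test run re-enable the same keys with the inverse substitution.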
00:10:55.132 05:01:54 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@49 -- # waitforiscsidevices 1 00:10:55.132 05:01:54 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@116 -- # local num=1 00:10:55.132 05:01:54 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:10:55.132 05:01:54 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:10:55.132 05:01:54 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:10:55.132 05:01:54 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:10:55.132 [2024-07-23 05:01:55.008002] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:55.132 05:01:55 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@119 -- # n=1 00:10:55.132 05:01:55 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:10:55.132 05:01:55 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@123 -- # return 0 00:10:55.132 05:01:55 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@51 -- # iscsiadm -m session -P 3 00:10:55.132 05:01:55 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@51 -- # grep 'Attached scsi disk' 00:10:55.132 05:01:55 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@51 -- # awk '{print $4}' 00:10:55.132 05:01:55 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@51 -- # dev=sda 00:10:55.132 05:01:55 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@54 -- # fiopid=77995 00:10:55.132 FIO pid: 77995 00:10:55.132 05:01:55 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@55 -- # echo 'FIO pid: 77995' 00:10:55.132 05:01:55 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@57 -- # trap 'iscsicleanup; killprocess $pid; killprocess $fiopid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:10:55.132 05:01:55 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@60 -- # for i in 1 2 3 00:10:55.132 05:01:55 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@61 -- # sleep 1 00:10:55.132 05:01:55 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@53 -- # 
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t read -r 60 00:10:55.132 [global] 00:10:55.132 thread=1 00:10:55.132 invalidate=1 00:10:55.132 rw=read 00:10:55.132 time_based=1 00:10:55.132 runtime=60 00:10:55.132 ioengine=libaio 00:10:55.132 direct=1 00:10:55.132 bs=512 00:10:55.132 iodepth=1 00:10:55.132 norandommap=1 00:10:55.132 numjobs=1 00:10:55.132 00:10:55.132 [job0] 00:10:55.132 filename=/dev/sda 00:10:55.132 queue_depth set to 113 (sda) 00:10:55.132 job0: (g=0): rw=read, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:10:55.132 fio-3.35 00:10:55.132 Starting 1 thread 00:10:56.067 05:01:56 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@62 -- # kill -s 0 77933 00:10:56.067 05:01:56 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@63 -- # kill -s 0 77995 00:10:56.067 05:01:56 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@64 -- # sg_reset -d /dev/sda 00:10:56.067 [2024-07-23 05:01:56.024953] iscsi.c:3690:iscsi_pdu_hdr_op_task: *NOTICE*: LOGICAL_UNIT_RESET 00:10:56.067 [2024-07-23 05:01:56.025032] lun.c: 157:_scsi_lun_execute_mgmt_task: *NOTICE*: Bdev scsi reset on lun reset 00:10:56.067 05:01:56 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@65 -- # sleep 1 00:10:56.067 [2024-07-23 05:01:56.027542] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:57.001 05:01:57 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@66 -- # kill -s 0 77933 00:10:57.001 05:01:57 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@67 -- # kill -s 0 77995 00:10:57.001 05:01:57 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@60 -- # for i in 1 2 3 00:10:57.001 05:01:57 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@61 -- # sleep 1 00:10:57.935 05:01:58 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@62 -- # kill -s 0 77933 00:10:57.935 05:01:58 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@63 -- # kill -s 0 77995 00:10:57.935 05:01:58 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@64 -- # sg_reset -d /dev/sda 00:10:57.936 [2024-07-23 
05:01:58.035387] iscsi.c:3690:iscsi_pdu_hdr_op_task: *NOTICE*: LOGICAL_UNIT_RESET 00:10:57.936 [2024-07-23 05:01:58.035488] lun.c: 157:_scsi_lun_execute_mgmt_task: *NOTICE*: Bdev scsi reset on lun reset 00:10:57.936 05:01:58 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@65 -- # sleep 1 00:10:57.936 [2024-07-23 05:01:58.036872] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:58.871 05:01:59 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@66 -- # kill -s 0 77933 00:10:58.871 05:01:59 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@67 -- # kill -s 0 77995 00:10:58.871 05:01:59 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@60 -- # for i in 1 2 3 00:10:58.871 05:01:59 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@61 -- # sleep 1 00:11:00.269 05:02:00 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@62 -- # kill -s 0 77933 00:11:00.269 05:02:00 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@63 -- # kill -s 0 77995 00:11:00.269 05:02:00 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@64 -- # sg_reset -d /dev/sda 00:11:00.269 [2024-07-23 05:02:00.045094] iscsi.c:3690:iscsi_pdu_hdr_op_task: *NOTICE*: LOGICAL_UNIT_RESET 00:11:00.269 [2024-07-23 05:02:00.045221] lun.c: 157:_scsi_lun_execute_mgmt_task: *NOTICE*: Bdev scsi reset on lun reset 00:11:00.269 05:02:00 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@65 -- # sleep 1 00:11:00.269 [2024-07-23 05:02:00.051767] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:00.835 05:02:01 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@66 -- # kill -s 0 77933 00:11:00.835 05:02:01 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@67 -- # kill -s 0 77995 00:11:00.835 05:02:01 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@70 -- # kill 77995 00:11:00.835 05:02:01 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@71 -- # wait 77995 00:11:00.835 05:02:01 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@71 -- # true 00:11:00.835 05:02:01 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@73 -- # trap - SIGINT SIGTERM EXIT 
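Each iteration of the reset loop above first probes the target (pid 77933) and the fio job (pid 77995) with `kill -s 0`, which delivers no signal: its exit status alone reports whether the process still exists. A self-contained sketch of that liveness check, using a `sleep` as a stand-in process:

```shell
sleep 30 &
pid=$!
# Signal 0 performs the permission/existence check without signalling:
if kill -s 0 "$pid" 2>/dev/null; then
  alive=yes
else
  alive=no
fi
echo "pid $pid alive=$alive"
kill "$pid" 2>/dev/null
wait "$pid" 2>/dev/null || true
```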
00:11:00.835 05:02:01 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@75 -- # iscsicleanup 00:11:00.835 Cleaning up iSCSI connection 00:11:00.835 05:02:01 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:11:00.835 05:02:01 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:11:01.094 fio: pid=78027, err=19/file:io_u.c:1889, func=io_u error, error=No such device 00:11:01.094 fio: io_u error on file /dev/sda: No such device: read offset=36195840, buflen=512 00:11:01.094 00:11:01.094 job0: (groupid=0, jobs=1): err=19 (file:io_u.c:1889, func=io_u error, error=No such device): pid=78027: Tue Jul 23 05:02:01 2024 00:11:01.094 read: IOPS=12.3k, BW=6152KiB/s (6299kB/s)(34.5MiB/5746msec) 00:11:01.094 slat (usec): min=3, max=955, avg= 6.55, stdev= 5.11 00:11:01.094 clat (usec): min=42, max=6478, avg=73.59, stdev=33.12 00:11:01.094 lat (usec): min=58, max=6484, avg=80.13, stdev=33.55 00:11:01.094 clat percentiles (usec): 00:11:01.094 | 1.00th=[ 55], 5.00th=[ 57], 10.00th=[ 59], 20.00th=[ 60], 00:11:01.094 | 30.00th=[ 62], 40.00th=[ 64], 50.00th=[ 68], 60.00th=[ 74], 00:11:01.094 | 70.00th=[ 79], 80.00th=[ 86], 90.00th=[ 96], 95.00th=[ 109], 00:11:01.094 | 99.00th=[ 129], 99.50th=[ 139], 99.90th=[ 167], 99.95th=[ 204], 00:11:01.094 | 99.99th=[ 1012] 00:11:01.094 bw ( KiB/s): min= 5925, max= 6510, per=100.00%, avg=6162.36, stdev=176.60, samples=11 00:11:01.094 iops : min=11850, max=13020, avg=12324.73, stdev=353.21, samples=11 00:11:01.094 lat (usec) : 50=0.09%, 100=91.95%, 250=7.92%, 500=0.02%, 750=0.01% 00:11:01.094 lat (usec) : 1000=0.01% 00:11:01.094 lat (msec) : 2=0.01%, 10=0.01% 00:11:01.094 cpu : usr=4.07%, sys=10.20%, ctx=70876, majf=0, minf=1 00:11:01.094 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:01.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.094 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:11:01.094 issued rwts: total=70696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.094 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:01.094 00:11:01.094 Run status group 0 (all jobs): 00:11:01.094 READ: bw=6152KiB/s (6299kB/s), 6152KiB/s-6152KiB/s (6299kB/s-6299kB/s), io=34.5MiB (36.2MB), run=5746-5746msec 00:11:01.094 00:11:01.094 Disk stats (read/write): 00:11:01.094 sda: ios=69604/0, merge=0/0, ticks=5032/0, in_queue=5032, util=98.38% 00:11:01.094 Logging out of session [sid: 6, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:11:01.094 Logout of [sid: 6, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:11:01.094 05:02:01 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:11:01.094 05:02:01 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@983 -- # rm -rf 00:11:01.094 05:02:01 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@76 -- # killprocess 77933 00:11:01.094 05:02:01 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@948 -- # '[' -z 77933 ']' 00:11:01.094 05:02:01 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@952 -- # kill -0 77933 00:11:01.094 05:02:01 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@953 -- # uname 00:11:01.094 05:02:01 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:01.094 05:02:01 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77933 00:11:01.094 05:02:01 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:01.094 killing process with pid 77933 00:11:01.094 05:02:01 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:01.094 05:02:01 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77933' 00:11:01.094 05:02:01 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@967 -- # kill 77933 
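The `killprocess` helper above guards the final `kill` by resolving the pid's command name with `ps --no-headers -o comm=` and refusing to signal if the name resolves to `sudo`. A sketch of that guard with a stand-in `sleep` process (the log passes the pid positionally; `-p` is used here for the same effect):

```shell
sleep 30 &
pid=$!
# Resolve the command name for the pid before deciding to kill it:
process_name=$(ps --no-headers -o comm= -p "$pid")
echo "process name: $process_name"
if [ "$process_name" != sudo ]; then
  kill "$pid"
fi
wait "$pid" 2>/dev/null || true
```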
00:11:01.094 05:02:01 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@972 -- # wait 77933 00:11:01.661 05:02:01 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@77 -- # iscsitestfini 00:11:01.661 05:02:01 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:11:01.661 00:11:01.661 real 0m9.027s 00:11:01.661 user 0m6.211s 00:11:01.661 sys 0m2.354s 00:11:01.662 05:02:01 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:01.662 05:02:01 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:11:01.662 ************************************ 00:11:01.662 END TEST iscsi_tgt_reset 00:11:01.662 ************************************ 00:11:01.662 05:02:01 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:11:01.662 05:02:01 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@35 -- # run_test iscsi_tgt_rpc_config /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config/rpc_config.sh 00:11:01.662 05:02:01 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:01.662 05:02:01 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:01.662 05:02:01 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:11:01.662 ************************************ 00:11:01.662 START TEST iscsi_tgt_rpc_config 00:11:01.662 ************************************ 00:11:01.662 05:02:01 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config/rpc_config.sh 00:11:01.662 * Looking for test storage... 
00:11:01.662 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config 00:11:01.662 05:02:01 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:11:01.662 05:02:01 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:11:01.662 05:02:01 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:11:01.662 05:02:01 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:11:01.662 05:02:01 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:11:01.662 05:02:01 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:11:01.662 05:02:01 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:11:01.662 05:02:01 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:11:01.662 05:02:01 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:11:01.662 05:02:01 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:11:01.662 05:02:01 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:11:01.662 05:02:01 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:11:01.662 05:02:01 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:11:01.662 05:02:01 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:11:01.662 05:02:01 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:11:01.662 05:02:01 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:11:01.662 05:02:01 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:11:01.662 
05:02:01 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:11:01.662 05:02:01 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:11:01.662 05:02:01 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:11:01.662 05:02:01 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@11 -- # iscsitestinit 00:11:01.662 05:02:01 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:11:01.662 05:02:01 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:01.662 05:02:01 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:01.662 05:02:01 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@16 -- # rpc_config_py=/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config/rpc_config.py 00:11:01.662 05:02:01 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@18 -- # timing_enter start_iscsi_tgt 00:11:01.662 05:02:01 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:01.662 05:02:01 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@10 -- # set +x 00:11:01.662 05:02:01 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@21 -- # pid=78172 00:11:01.662 05:02:01 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@20 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:11:01.662 Process pid: 78172 00:11:01.662 05:02:01 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@22 -- # echo 'Process pid: 78172' 00:11:01.662 05:02:01 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@24 -- # trap 'killprocess $pid; exit 1' SIGINT SIGTERM EXIT 00:11:01.662 05:02:01 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@26 -- # waitforlisten 78172 00:11:01.662 05:02:01 
iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@829 -- # '[' -z 78172 ']' 00:11:01.662 05:02:01 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.662 05:02:01 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:01.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.662 05:02:01 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.662 05:02:01 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:01.662 05:02:01 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@10 -- # set +x 00:11:01.662 [2024-07-23 05:02:01.776865] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:11:01.662 [2024-07-23 05:02:01.776965] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78172 ] 00:11:01.921 [2024-07-23 05:02:01.910598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.921 [2024-07-23 05:02:01.977507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.854 05:02:02 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:02.854 05:02:02 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@862 -- # return 0 00:11:02.854 05:02:02 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@28 -- # rpc_wait_pid=78188 00:11:02.854 05:02:02 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_options -o 30 -a 16 00:11:02.854 05:02:02 iscsi_tgt.iscsi_tgt_rpc_config -- 
rpc_config/rpc_config.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_wait_init 00:11:02.854 05:02:03 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@32 -- # ps 78188 00:11:02.854 PID TTY STAT TIME COMMAND 00:11:02.854 78188 ? S 0:00 python3 /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_wait_init 00:11:02.854 05:02:03 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:11:03.419 05:02:03 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@35 -- # sleep 1 00:11:04.353 iscsi_tgt is listening. Running tests... 00:11:04.353 05:02:04 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@36 -- # echo 'iscsi_tgt is listening. Running tests...' 00:11:04.353 05:02:04 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@39 -- # NOT ps 78188 00:11:04.353 05:02:04 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@648 -- # local es=0 00:11:04.353 05:02:04 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@650 -- # valid_exec_arg ps 78188 00:11:04.353 05:02:04 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@636 -- # local arg=ps 00:11:04.353 05:02:04 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:04.353 05:02:04 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # type -t ps 00:11:04.353 05:02:04 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:04.353 05:02:04 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # type -P ps 00:11:04.353 05:02:04 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:04.353 05:02:04 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # arg=/usr/bin/ps 00:11:04.353 05:02:04 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/ps ]] 00:11:04.353 
05:02:04 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@651 -- # ps 78188 00:11:04.353 PID TTY STAT TIME COMMAND 00:11:04.353 05:02:04 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@651 -- # es=1 00:11:04.353 05:02:04 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:04.353 05:02:04 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:04.353 05:02:04 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:04.353 05:02:04 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@43 -- # rpc_wait_pid=78219 00:11:04.353 05:02:04 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_wait_init 00:11:04.353 05:02:04 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@44 -- # sleep 1 00:11:05.727 05:02:05 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@45 -- # NOT ps 78219 00:11:05.727 05:02:05 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@648 -- # local es=0 00:11:05.727 05:02:05 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@650 -- # valid_exec_arg ps 78219 00:11:05.728 05:02:05 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@636 -- # local arg=ps 00:11:05.728 05:02:05 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:05.728 05:02:05 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # type -t ps 00:11:05.728 05:02:05 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:05.728 05:02:05 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # type -P ps 00:11:05.728 05:02:05 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:05.728 05:02:05 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # arg=/usr/bin/ps 00:11:05.728 
05:02:05 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/ps ]] 00:11:05.728 05:02:05 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@651 -- # ps 78219 00:11:05.728 PID TTY STAT TIME COMMAND 00:11:05.728 05:02:05 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@651 -- # es=1 00:11:05.728 05:02:05 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:05.728 05:02:05 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:05.728 05:02:05 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:05.728 05:02:05 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@47 -- # timing_exit start_iscsi_tgt 00:11:05.728 05:02:05 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:05.728 05:02:05 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@10 -- # set +x 00:11:05.728 05:02:05 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@49 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config/rpc_config.py /home/vagrant/spdk_repo/spdk/scripts/rpc.py 10.0.0.1 10.0.0.2 3260 10.0.0.2/32 spdk_iscsi_ns 00:11:32.324 [2024-07-23 05:02:30.543945] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:33.708 [2024-07-23 05:02:33.691353] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:35.610 verify_log_flag_rpc_methods passed 00:11:35.610 create_malloc_bdevs_rpc_methods passed 00:11:35.610 verify_portal_groups_rpc_methods passed 00:11:35.610 verify_initiator_groups_rpc_method passed. 00:11:35.610 This issue will be fixed later. 00:11:35.610 verify_target_nodes_rpc_methods passed. 
00:11:35.610 verify_scsi_devices_rpc_methods passed 00:11:35.610 verify_iscsi_connection_rpc_methods passed 00:11:35.610 05:02:35 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:11:35.610 [ 00:11:35.610 { 00:11:35.610 "name": "Malloc0", 00:11:35.610 "aliases": [ 00:11:35.610 "eb2f2324-e8cc-4128-8c99-d256d3e77d55" 00:11:35.610 ], 00:11:35.610 "product_name": "Malloc disk", 00:11:35.610 "block_size": 512, 00:11:35.610 "num_blocks": 131072, 00:11:35.610 "uuid": "eb2f2324-e8cc-4128-8c99-d256d3e77d55", 00:11:35.610 "assigned_rate_limits": { 00:11:35.610 "rw_ios_per_sec": 0, 00:11:35.610 "rw_mbytes_per_sec": 0, 00:11:35.610 "r_mbytes_per_sec": 0, 00:11:35.610 "w_mbytes_per_sec": 0 00:11:35.610 }, 00:11:35.610 "claimed": false, 00:11:35.610 "zoned": false, 00:11:35.610 "supported_io_types": { 00:11:35.610 "read": true, 00:11:35.610 "write": true, 00:11:35.610 "unmap": true, 00:11:35.610 "flush": true, 00:11:35.610 "reset": true, 00:11:35.610 "nvme_admin": false, 00:11:35.610 "nvme_io": false, 00:11:35.610 "nvme_io_md": false, 00:11:35.610 "write_zeroes": true, 00:11:35.610 "zcopy": true, 00:11:35.610 "get_zone_info": false, 00:11:35.610 "zone_management": false, 00:11:35.610 "zone_append": false, 00:11:35.610 "compare": false, 00:11:35.610 "compare_and_write": false, 00:11:35.610 "abort": true, 00:11:35.610 "seek_hole": false, 00:11:35.610 "seek_data": false, 00:11:35.610 "copy": true, 00:11:35.610 "nvme_iov_md": false 00:11:35.610 }, 00:11:35.610 "memory_domains": [ 00:11:35.610 { 00:11:35.610 "dma_device_id": "system", 00:11:35.610 "dma_device_type": 1 00:11:35.610 }, 00:11:35.610 { 00:11:35.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.610 "dma_device_type": 2 00:11:35.610 } 00:11:35.610 ], 00:11:35.610 "driver_specific": {} 00:11:35.610 }, 00:11:35.610 { 00:11:35.610 "name": "Malloc1", 00:11:35.610 "aliases": [ 00:11:35.610 "4fb93115-4782-494c-9244-7a33857cae44" 00:11:35.610 ], 
00:11:35.610 "product_name": "Malloc disk", 00:11:35.610 "block_size": 512, 00:11:35.610 "num_blocks": 131072, 00:11:35.610 "uuid": "4fb93115-4782-494c-9244-7a33857cae44", 00:11:35.610 "assigned_rate_limits": { 00:11:35.610 "rw_ios_per_sec": 0, 00:11:35.610 "rw_mbytes_per_sec": 0, 00:11:35.610 "r_mbytes_per_sec": 0, 00:11:35.610 "w_mbytes_per_sec": 0 00:11:35.610 }, 00:11:35.610 "claimed": false, 00:11:35.610 "zoned": false, 00:11:35.610 "supported_io_types": { 00:11:35.610 "read": true, 00:11:35.611 "write": true, 00:11:35.611 "unmap": true, 00:11:35.611 "flush": true, 00:11:35.611 "reset": true, 00:11:35.611 "nvme_admin": false, 00:11:35.611 "nvme_io": false, 00:11:35.611 "nvme_io_md": false, 00:11:35.611 "write_zeroes": true, 00:11:35.611 "zcopy": true, 00:11:35.611 "get_zone_info": false, 00:11:35.611 "zone_management": false, 00:11:35.611 "zone_append": false, 00:11:35.611 "compare": false, 00:11:35.611 "compare_and_write": false, 00:11:35.611 "abort": true, 00:11:35.611 "seek_hole": false, 00:11:35.611 "seek_data": false, 00:11:35.611 "copy": true, 00:11:35.611 "nvme_iov_md": false 00:11:35.611 }, 00:11:35.611 "memory_domains": [ 00:11:35.611 { 00:11:35.611 "dma_device_id": "system", 00:11:35.611 "dma_device_type": 1 00:11:35.611 }, 00:11:35.611 { 00:11:35.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.611 "dma_device_type": 2 00:11:35.611 } 00:11:35.611 ], 00:11:35.611 "driver_specific": {} 00:11:35.611 }, 00:11:35.611 { 00:11:35.611 "name": "Malloc2", 00:11:35.611 "aliases": [ 00:11:35.611 "06d64d8d-b593-40db-9515-d41469090136" 00:11:35.611 ], 00:11:35.611 "product_name": "Malloc disk", 00:11:35.611 "block_size": 512, 00:11:35.611 "num_blocks": 131072, 00:11:35.611 "uuid": "06d64d8d-b593-40db-9515-d41469090136", 00:11:35.611 "assigned_rate_limits": { 00:11:35.611 "rw_ios_per_sec": 0, 00:11:35.611 "rw_mbytes_per_sec": 0, 00:11:35.611 "r_mbytes_per_sec": 0, 00:11:35.611 "w_mbytes_per_sec": 0 00:11:35.611 }, 00:11:35.611 "claimed": false, 00:11:35.611 
"zoned": false, 00:11:35.611 "supported_io_types": { 00:11:35.611 "read": true, 00:11:35.611 "write": true, 00:11:35.611 "unmap": true, 00:11:35.611 "flush": true, 00:11:35.611 "reset": true, 00:11:35.611 "nvme_admin": false, 00:11:35.611 "nvme_io": false, 00:11:35.611 "nvme_io_md": false, 00:11:35.611 "write_zeroes": true, 00:11:35.611 "zcopy": true, 00:11:35.611 "get_zone_info": false, 00:11:35.611 "zone_management": false, 00:11:35.611 "zone_append": false, 00:11:35.611 "compare": false, 00:11:35.611 "compare_and_write": false, 00:11:35.611 "abort": true, 00:11:35.611 "seek_hole": false, 00:11:35.611 "seek_data": false, 00:11:35.611 "copy": true, 00:11:35.611 "nvme_iov_md": false 00:11:35.611 }, 00:11:35.611 "memory_domains": [ 00:11:35.611 { 00:11:35.611 "dma_device_id": "system", 00:11:35.611 "dma_device_type": 1 00:11:35.611 }, 00:11:35.611 { 00:11:35.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.611 "dma_device_type": 2 00:11:35.611 } 00:11:35.611 ], 00:11:35.611 "driver_specific": {} 00:11:35.611 }, 00:11:35.611 { 00:11:35.611 "name": "Malloc3", 00:11:35.611 "aliases": [ 00:11:35.611 "b54dd65b-042e-47e0-9f41-34ad754b14fa" 00:11:35.611 ], 00:11:35.611 "product_name": "Malloc disk", 00:11:35.611 "block_size": 512, 00:11:35.611 "num_blocks": 131072, 00:11:35.611 "uuid": "b54dd65b-042e-47e0-9f41-34ad754b14fa", 00:11:35.611 "assigned_rate_limits": { 00:11:35.611 "rw_ios_per_sec": 0, 00:11:35.611 "rw_mbytes_per_sec": 0, 00:11:35.611 "r_mbytes_per_sec": 0, 00:11:35.611 "w_mbytes_per_sec": 0 00:11:35.611 }, 00:11:35.611 "claimed": false, 00:11:35.611 "zoned": false, 00:11:35.611 "supported_io_types": { 00:11:35.611 "read": true, 00:11:35.611 "write": true, 00:11:35.611 "unmap": true, 00:11:35.611 "flush": true, 00:11:35.611 "reset": true, 00:11:35.611 "nvme_admin": false, 00:11:35.611 "nvme_io": false, 00:11:35.611 "nvme_io_md": false, 00:11:35.611 "write_zeroes": true, 00:11:35.611 "zcopy": true, 00:11:35.611 "get_zone_info": false, 00:11:35.611 
"zone_management": false, 00:11:35.611 "zone_append": false, 00:11:35.611 "compare": false, 00:11:35.611 "compare_and_write": false, 00:11:35.611 "abort": true, 00:11:35.611 "seek_hole": false, 00:11:35.611 "seek_data": false, 00:11:35.611 "copy": true, 00:11:35.611 "nvme_iov_md": false 00:11:35.611 }, 00:11:35.611 "memory_domains": [ 00:11:35.611 { 00:11:35.611 "dma_device_id": "system", 00:11:35.611 "dma_device_type": 1 00:11:35.611 }, 00:11:35.611 { 00:11:35.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.611 "dma_device_type": 2 00:11:35.611 } 00:11:35.611 ], 00:11:35.611 "driver_specific": {} 00:11:35.611 }, 00:11:35.611 { 00:11:35.611 "name": "Malloc4", 00:11:35.611 "aliases": [ 00:11:35.611 "cfe05ec5-b335-40bd-9a93-3f7a5ff7f31a" 00:11:35.611 ], 00:11:35.611 "product_name": "Malloc disk", 00:11:35.611 "block_size": 512, 00:11:35.611 "num_blocks": 131072, 00:11:35.611 "uuid": "cfe05ec5-b335-40bd-9a93-3f7a5ff7f31a", 00:11:35.611 "assigned_rate_limits": { 00:11:35.611 "rw_ios_per_sec": 0, 00:11:35.611 "rw_mbytes_per_sec": 0, 00:11:35.611 "r_mbytes_per_sec": 0, 00:11:35.611 "w_mbytes_per_sec": 0 00:11:35.611 }, 00:11:35.611 "claimed": false, 00:11:35.611 "zoned": false, 00:11:35.611 "supported_io_types": { 00:11:35.611 "read": true, 00:11:35.611 "write": true, 00:11:35.611 "unmap": true, 00:11:35.611 "flush": true, 00:11:35.611 "reset": true, 00:11:35.611 "nvme_admin": false, 00:11:35.611 "nvme_io": false, 00:11:35.611 "nvme_io_md": false, 00:11:35.611 "write_zeroes": true, 00:11:35.611 "zcopy": true, 00:11:35.611 "get_zone_info": false, 00:11:35.611 "zone_management": false, 00:11:35.611 "zone_append": false, 00:11:35.611 "compare": false, 00:11:35.611 "compare_and_write": false, 00:11:35.611 "abort": true, 00:11:35.611 "seek_hole": false, 00:11:35.611 "seek_data": false, 00:11:35.611 "copy": true, 00:11:35.611 "nvme_iov_md": false 00:11:35.611 }, 00:11:35.611 "memory_domains": [ 00:11:35.611 { 00:11:35.611 "dma_device_id": "system", 00:11:35.611 
"dma_device_type": 1 00:11:35.611 }, 00:11:35.611 { 00:11:35.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.611 "dma_device_type": 2 00:11:35.611 } 00:11:35.611 ], 00:11:35.611 "driver_specific": {} 00:11:35.611 }, 00:11:35.611 { 00:11:35.611 "name": "Malloc5", 00:11:35.611 "aliases": [ 00:11:35.611 "f9ed73a9-6eb9-4ce1-946b-3fe7c99f3860" 00:11:35.612 ], 00:11:35.612 "product_name": "Malloc disk", 00:11:35.612 "block_size": 512, 00:11:35.612 "num_blocks": 131072, 00:11:35.612 "uuid": "f9ed73a9-6eb9-4ce1-946b-3fe7c99f3860", 00:11:35.612 "assigned_rate_limits": { 00:11:35.612 "rw_ios_per_sec": 0, 00:11:35.612 "rw_mbytes_per_sec": 0, 00:11:35.612 "r_mbytes_per_sec": 0, 00:11:35.612 "w_mbytes_per_sec": 0 00:11:35.612 }, 00:11:35.612 "claimed": false, 00:11:35.612 "zoned": false, 00:11:35.612 "supported_io_types": { 00:11:35.612 "read": true, 00:11:35.612 "write": true, 00:11:35.612 "unmap": true, 00:11:35.612 "flush": true, 00:11:35.612 "reset": true, 00:11:35.612 "nvme_admin": false, 00:11:35.612 "nvme_io": false, 00:11:35.612 "nvme_io_md": false, 00:11:35.612 "write_zeroes": true, 00:11:35.612 "zcopy": true, 00:11:35.612 "get_zone_info": false, 00:11:35.612 "zone_management": false, 00:11:35.612 "zone_append": false, 00:11:35.612 "compare": false, 00:11:35.612 "compare_and_write": false, 00:11:35.612 "abort": true, 00:11:35.612 "seek_hole": false, 00:11:35.612 "seek_data": false, 00:11:35.612 "copy": true, 00:11:35.612 "nvme_iov_md": false 00:11:35.612 }, 00:11:35.612 "memory_domains": [ 00:11:35.612 { 00:11:35.612 "dma_device_id": "system", 00:11:35.612 "dma_device_type": 1 00:11:35.612 }, 00:11:35.612 { 00:11:35.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.612 "dma_device_type": 2 00:11:35.612 } 00:11:35.612 ], 00:11:35.612 "driver_specific": {} 00:11:35.612 } 00:11:35.612 ] 00:11:35.612 05:02:35 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@53 -- # trap - SIGINT SIGTERM EXIT 00:11:35.612 05:02:35 iscsi_tgt.iscsi_tgt_rpc_config -- 
rpc_config/rpc_config.sh@55 -- # iscsicleanup 00:11:35.612 Cleaning up iSCSI connection 00:11:35.612 05:02:35 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:11:35.612 05:02:35 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:11:35.612 iscsiadm: No matching sessions found 00:11:35.612 05:02:35 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@981 -- # true 00:11:35.612 05:02:35 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:11:35.612 iscsiadm: No records found 00:11:35.612 05:02:35 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@982 -- # true 00:11:35.612 05:02:35 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@983 -- # rm -rf 00:11:35.612 05:02:35 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@56 -- # killprocess 78172 00:11:35.612 05:02:35 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@948 -- # '[' -z 78172 ']' 00:11:35.612 05:02:35 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@952 -- # kill -0 78172 00:11:35.612 05:02:35 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@953 -- # uname 00:11:35.612 05:02:35 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:35.612 05:02:35 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78172 00:11:35.612 05:02:35 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:35.612 05:02:35 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:35.612 killing process with pid 78172 00:11:35.612 05:02:35 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78172' 00:11:35.612 05:02:35 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@967 -- # kill 78172 00:11:35.612 
05:02:35 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@972 -- # wait 78172 00:11:36.178 05:02:36 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@58 -- # iscsitestfini 00:11:36.178 05:02:36 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:11:36.178 00:11:36.178 real 0m34.635s 00:11:36.178 user 1m0.174s 00:11:36.178 sys 0m4.680s 00:11:36.178 05:02:36 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:36.178 05:02:36 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@10 -- # set +x 00:11:36.178 ************************************ 00:11:36.178 END TEST iscsi_tgt_rpc_config 00:11:36.178 ************************************ 00:11:36.178 05:02:36 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:11:36.178 05:02:36 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@36 -- # run_test iscsi_tgt_iscsi_lvol /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/lvol/iscsi_lvol.sh 00:11:36.178 05:02:36 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:36.178 05:02:36 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:36.178 05:02:36 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:11:36.178 ************************************ 00:11:36.178 START TEST iscsi_tgt_iscsi_lvol 00:11:36.178 ************************************ 00:11:36.178 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/lvol/iscsi_lvol.sh 00:11:36.178 * Looking for test storage... 
00:11:36.437 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/lvol 00:11:36.437 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:11:36.437 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:11:36.437 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:11:36.437 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:11:36.437 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:11:36.437 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:11:36.437 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:11:36.437 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:11:36.437 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:11:36.437 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:11:36.437 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:11:36.437 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:11:36.437 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:11:36.437 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:11:36.437 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:11:36.437 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:11:36.437 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:11:36.437 05:02:36 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:11:36.437 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:11:36.437 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:11:36.437 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@11 -- # iscsitestinit 00:11:36.437 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:11:36.437 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@13 -- # MALLOC_BDEV_SIZE=128 00:11:36.437 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:36.437 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@15 -- # '[' 1 -eq 1 ']' 00:11:36.437 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@16 -- # NUM_LVS=10 00:11:36.437 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@17 -- # NUM_LVOL=10 00:11:36.437 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@23 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:36.437 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@24 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:11:36.437 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@26 -- # timing_enter start_iscsi_tgt 00:11:36.437 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:36.437 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:36.437 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@29 -- # pid=78785 00:11:36.437 Process pid: 78785 00:11:36.437 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@30 -- # echo 'Process pid: 78785' 00:11:36.437 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@28 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt 
-m 0xF --wait-for-rpc 00:11:36.437 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@32 -- # trap 'iscsicleanup; killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:11:36.437 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@34 -- # waitforlisten 78785 00:11:36.437 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@829 -- # '[' -z 78785 ']' 00:11:36.437 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.437 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:36.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.437 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.437 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:36.437 05:02:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:36.437 [2024-07-23 05:02:36.488293] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:11:36.437 [2024-07-23 05:02:36.488408] [ DPDK EAL parameters: iscsi --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78785 ] 00:11:36.437 [2024-07-23 05:02:36.628332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:36.695 [2024-07-23 05:02:36.728422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.695 [2024-07-23 05:02:36.728588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:36.695 [2024-07-23 05:02:36.728863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.695 [2024-07-23 05:02:36.728715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:37.262 05:02:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:37.262 05:02:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@862 -- # return 0 00:11:37.262 05:02:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_options -o 30 -a 16 00:11:37.520 05:02:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:11:38.130 iscsi_tgt is listening. Running tests... 00:11:38.130 05:02:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@37 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:11:38.130 05:02:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@39 -- # timing_exit start_iscsi_tgt 00:11:38.130 05:02:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:38.130 05:02:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:38.130 05:02:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@41 -- # timing_enter setup 00:11:38.130 05:02:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:38.130 05:02:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:38.130 05:02:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:11:38.389 05:02:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # seq 1 10 00:11:38.389 05:02:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:11:38.389 05:02:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=3 00:11:38.389 05:02:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 3 ANY 10.0.0.2/32 00:11:38.646 05:02:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 1 -eq 1 ']' 00:11:38.646 05:02:38 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:11:38.904 05:02:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@50 -- # malloc_bdevs='Malloc0 ' 00:11:38.904 05:02:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:11:39.162 05:02:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@51 -- # malloc_bdevs+=Malloc1 00:11:39.162 05:02:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@52 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:39.420 05:02:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@53 -- # bdev=raid0 00:11:39.420 05:02:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs_1 -c 1048576 00:11:39.679 05:02:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=79b9a281-8a58-4597-b2a7-642a5759c1db 00:11:39.679 05:02:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:11:39.679 05:02:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:11:39.679 05:02:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:39.679 05:02:39 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 79b9a281-8a58-4597-b2a7-642a5759c1db lbd_1 10 00:11:39.937 05:02:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=fad11e40-7837-4420-9a6f-ecaa913af092 00:11:39.937 05:02:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='fad11e40-7837-4420-9a6f-ecaa913af092:0 ' 00:11:39.937 05:02:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:39.937 05:02:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 79b9a281-8a58-4597-b2a7-642a5759c1db lbd_2 10 00:11:40.196 05:02:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=f5a78e4d-cde3-45fc-b2cd-1f4a3ef79f9a 00:11:40.196 05:02:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='f5a78e4d-cde3-45fc-b2cd-1f4a3ef79f9a:1 ' 00:11:40.196 05:02:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:40.196 05:02:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 79b9a281-8a58-4597-b2a7-642a5759c1db lbd_3 10 00:11:40.455 05:02:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=a69af5f3-b3a1-4677-b979-4b64ae8f6116 00:11:40.455 05:02:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='a69af5f3-b3a1-4677-b979-4b64ae8f6116:2 ' 00:11:40.455 05:02:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:40.455 05:02:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 79b9a281-8a58-4597-b2a7-642a5759c1db lbd_4 10 00:11:40.713 05:02:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=1a8627cf-9e88-49fd-8aa2-bf23c4c899be 00:11:40.713 05:02:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='1a8627cf-9e88-49fd-8aa2-bf23c4c899be:3 ' 00:11:40.713 05:02:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:40.713 05:02:40 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 79b9a281-8a58-4597-b2a7-642a5759c1db lbd_5 10 00:11:40.972 05:02:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=16de5fb0-5584-4aee-a41b-697071ca244a 00:11:40.972 05:02:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='16de5fb0-5584-4aee-a41b-697071ca244a:4 ' 00:11:40.972 05:02:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:40.972 05:02:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 79b9a281-8a58-4597-b2a7-642a5759c1db lbd_6 10 00:11:41.230 05:02:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=d57e5e1a-e9c9-4058-8170-4ec9bfb512b5 00:11:41.230 05:02:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@62 -- # LUNs+='d57e5e1a-e9c9-4058-8170-4ec9bfb512b5:5 ' 00:11:41.230 05:02:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:41.230 05:02:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 79b9a281-8a58-4597-b2a7-642a5759c1db lbd_7 10 00:11:41.489 05:02:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=db10845c-1dda-4592-811f-a925f22a9e19 00:11:41.489 05:02:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='db10845c-1dda-4592-811f-a925f22a9e19:6 ' 00:11:41.490 05:02:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:41.490 05:02:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 79b9a281-8a58-4597-b2a7-642a5759c1db lbd_8 10 00:11:41.749 05:02:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=3eaa4658-7cd4-469e-8dfe-b02de2041280 00:11:41.749 05:02:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='3eaa4658-7cd4-469e-8dfe-b02de2041280:7 ' 00:11:41.749 05:02:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:41.749 05:02:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 79b9a281-8a58-4597-b2a7-642a5759c1db lbd_9 10 00:11:42.008 05:02:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=5a4cf37f-a128-4dbd-976f-8d105f5d0ef8 00:11:42.008 05:02:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='5a4cf37f-a128-4dbd-976f-8d105f5d0ef8:8 ' 00:11:42.008 05:02:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:42.008 05:02:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 79b9a281-8a58-4597-b2a7-642a5759c1db lbd_10 10 00:11:42.267 05:02:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=5bda3295-a9ae-43fc-8f05-cfe0117949ca 00:11:42.267 05:02:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='5bda3295-a9ae-43fc-8f05-cfe0117949ca:9 ' 00:11:42.267 05:02:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target1 Target1_alias 'fad11e40-7837-4420-9a6f-ecaa913af092:0 f5a78e4d-cde3-45fc-b2cd-1f4a3ef79f9a:1 a69af5f3-b3a1-4677-b979-4b64ae8f6116:2 1a8627cf-9e88-49fd-8aa2-bf23c4c899be:3 16de5fb0-5584-4aee-a41b-697071ca244a:4 d57e5e1a-e9c9-4058-8170-4ec9bfb512b5:5 db10845c-1dda-4592-811f-a925f22a9e19:6 3eaa4658-7cd4-469e-8dfe-b02de2041280:7 5a4cf37f-a128-4dbd-976f-8d105f5d0ef8:8 5bda3295-a9ae-43fc-8f05-cfe0117949ca:9 ' 1:3 256 -d 00:11:42.526 05:02:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:11:42.526 05:02:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=4 00:11:42.526 05:02:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 4 ANY 10.0.0.2/32 00:11:42.785 05:02:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 2 -eq 1 ']' 00:11:42.785 05:02:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:11:43.044 05:02:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc2 00:11:43.044 05:02:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc2 lvs_2 -c 1048576 00:11:43.302 05:02:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=e0531b31-571c-471b-b53b-2a5f355dfdca 00:11:43.302 
05:02:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:11:43.302 05:02:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:11:43.302 05:02:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:43.302 05:02:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e0531b31-571c-471b-b53b-2a5f355dfdca lbd_1 10 00:11:43.600 05:02:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=ddec6837-59a5-4c24-895f-94272c680622 00:11:43.600 05:02:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='ddec6837-59a5-4c24-895f-94272c680622:0 ' 00:11:43.600 05:02:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:43.600 05:02:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e0531b31-571c-471b-b53b-2a5f355dfdca lbd_2 10 00:11:43.867 05:02:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=7047c355-73ba-496d-8956-5788d6637d4a 00:11:43.867 05:02:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='7047c355-73ba-496d-8956-5788d6637d4a:1 ' 00:11:43.867 05:02:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:43.867 05:02:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e0531b31-571c-471b-b53b-2a5f355dfdca lbd_3 10 00:11:43.867 05:02:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=2891ef8a-fa0e-44a7-9429-276082c28107 00:11:43.867 05:02:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='2891ef8a-fa0e-44a7-9429-276082c28107:2 ' 00:11:43.867 05:02:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:43.867 05:02:44 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e0531b31-571c-471b-b53b-2a5f355dfdca lbd_4 10 00:11:44.126 05:02:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=ad8c28c4-0c19-421f-b746-315f4eacb04e 00:11:44.126 05:02:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='ad8c28c4-0c19-421f-b746-315f4eacb04e:3 ' 00:11:44.127 05:02:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:44.127 05:02:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e0531b31-571c-471b-b53b-2a5f355dfdca lbd_5 10 00:11:44.386 05:02:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=79da88d5-8540-44d8-b5f4-d3f8ab7f14cc 00:11:44.386 05:02:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='79da88d5-8540-44d8-b5f4-d3f8ab7f14cc:4 ' 00:11:44.386 05:02:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:44.386 05:02:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e0531b31-571c-471b-b53b-2a5f355dfdca lbd_6 10 00:11:44.645 05:02:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=8c40b73d-acbf-43fa-a101-4aeae0afd188 00:11:44.645 05:02:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='8c40b73d-acbf-43fa-a101-4aeae0afd188:5 ' 00:11:44.645 05:02:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:44.645 05:02:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e0531b31-571c-471b-b53b-2a5f355dfdca lbd_7 10 00:11:44.904 05:02:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=7cf5dbc3-e7ac-4dd0-870b-d6090e99bf8b 
00:11:44.904 05:02:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='7cf5dbc3-e7ac-4dd0-870b-d6090e99bf8b:6 ' 00:11:44.904 05:02:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:44.904 05:02:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e0531b31-571c-471b-b53b-2a5f355dfdca lbd_8 10 00:11:45.163 05:02:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=6d6595d1-327e-4366-8cb0-de38d93a78b1 00:11:45.163 05:02:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='6d6595d1-327e-4366-8cb0-de38d93a78b1:7 ' 00:11:45.163 05:02:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:45.163 05:02:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e0531b31-571c-471b-b53b-2a5f355dfdca lbd_9 10 00:11:45.422 05:02:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=29c21462-f25c-4de7-956a-1753d4f18dad 00:11:45.422 05:02:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='29c21462-f25c-4de7-956a-1753d4f18dad:8 ' 00:11:45.422 05:02:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:45.422 05:02:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e0531b31-571c-471b-b53b-2a5f355dfdca lbd_10 10 00:11:45.989 05:02:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=d469ec9b-7798-4681-91d4-aca2f1faa710 00:11:45.989 05:02:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='d469ec9b-7798-4681-91d4-aca2f1faa710:9 ' 00:11:45.989 05:02:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target2 Target2_alias 
'ddec6837-59a5-4c24-895f-94272c680622:0 7047c355-73ba-496d-8956-5788d6637d4a:1 2891ef8a-fa0e-44a7-9429-276082c28107:2 ad8c28c4-0c19-421f-b746-315f4eacb04e:3 79da88d5-8540-44d8-b5f4-d3f8ab7f14cc:4 8c40b73d-acbf-43fa-a101-4aeae0afd188:5 7cf5dbc3-e7ac-4dd0-870b-d6090e99bf8b:6 6d6595d1-327e-4366-8cb0-de38d93a78b1:7 29c21462-f25c-4de7-956a-1753d4f18dad:8 d469ec9b-7798-4681-91d4-aca2f1faa710:9 ' 1:4 256 -d 00:11:45.989 05:02:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:11:45.989 05:02:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=5 00:11:45.989 05:02:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 5 ANY 10.0.0.2/32 00:11:46.247 05:02:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 3 -eq 1 ']' 00:11:46.247 05:02:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:11:46.815 05:02:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc3 00:11:46.815 05:02:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc3 lvs_3 -c 1048576 00:11:47.074 05:02:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=e5c923a1-dca0-4e6d-aa90-f388f0224930 00:11:47.074 05:02:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:11:47.074 05:02:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:11:47.074 05:02:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:47.074 05:02:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e5c923a1-dca0-4e6d-aa90-f388f0224930 lbd_1 10 00:11:47.332 05:02:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@61 -- # lb_name=abcd6468-3a5b-456d-b031-1c9f0739ea17 00:11:47.332 05:02:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='abcd6468-3a5b-456d-b031-1c9f0739ea17:0 ' 00:11:47.332 05:02:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:47.332 05:02:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e5c923a1-dca0-4e6d-aa90-f388f0224930 lbd_2 10 00:11:47.591 05:02:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=24c07608-0240-42c0-8c25-1f0382b2967d 00:11:47.591 05:02:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='24c07608-0240-42c0-8c25-1f0382b2967d:1 ' 00:11:47.591 05:02:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:47.591 05:02:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e5c923a1-dca0-4e6d-aa90-f388f0224930 lbd_3 10 00:11:47.849 05:02:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=96f0ecfc-4266-4a61-b0a5-672bf28a8700 00:11:47.849 05:02:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='96f0ecfc-4266-4a61-b0a5-672bf28a8700:2 ' 00:11:47.849 05:02:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:47.849 05:02:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e5c923a1-dca0-4e6d-aa90-f388f0224930 lbd_4 10 00:11:47.849 05:02:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=53c3911f-8fc9-4eff-b826-8e508e3c0667 00:11:47.849 05:02:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='53c3911f-8fc9-4eff-b826-8e508e3c0667:3 ' 00:11:47.849 05:02:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 
$NUM_LVOL) 00:11:47.849 05:02:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e5c923a1-dca0-4e6d-aa90-f388f0224930 lbd_5 10 00:11:48.109 05:02:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=9d39c2e6-ab20-414b-b04e-cc614c4d9d20 00:11:48.109 05:02:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='9d39c2e6-ab20-414b-b04e-cc614c4d9d20:4 ' 00:11:48.109 05:02:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:48.109 05:02:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e5c923a1-dca0-4e6d-aa90-f388f0224930 lbd_6 10 00:11:48.368 05:02:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=061e4d82-1975-4561-9332-81e8c1e8015a 00:11:48.368 05:02:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='061e4d82-1975-4561-9332-81e8c1e8015a:5 ' 00:11:48.368 05:02:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:48.368 05:02:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e5c923a1-dca0-4e6d-aa90-f388f0224930 lbd_7 10 00:11:48.627 05:02:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=33a1cc41-5d4d-47dd-93f1-5b08faa69e38 00:11:48.627 05:02:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='33a1cc41-5d4d-47dd-93f1-5b08faa69e38:6 ' 00:11:48.627 05:02:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:48.627 05:02:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e5c923a1-dca0-4e6d-aa90-f388f0224930 lbd_8 10 00:11:48.886 05:02:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # 
lb_name=9da9ba42-15bb-4a6c-bc63-648762e3c677 00:11:48.886 05:02:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='9da9ba42-15bb-4a6c-bc63-648762e3c677:7 ' 00:11:48.886 05:02:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:48.886 05:02:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e5c923a1-dca0-4e6d-aa90-f388f0224930 lbd_9 10 00:11:49.159 05:02:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=f7889af8-4339-4b5b-9c54-c2a01baf4b74 00:11:49.159 05:02:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='f7889af8-4339-4b5b-9c54-c2a01baf4b74:8 ' 00:11:49.159 05:02:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:49.159 05:02:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e5c923a1-dca0-4e6d-aa90-f388f0224930 lbd_10 10 00:11:49.432 05:02:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=6e693182-be7e-4a6e-8ee9-3c12bfb510ef 00:11:49.432 05:02:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='6e693182-be7e-4a6e-8ee9-3c12bfb510ef:9 ' 00:11:49.432 05:02:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias 'abcd6468-3a5b-456d-b031-1c9f0739ea17:0 24c07608-0240-42c0-8c25-1f0382b2967d:1 96f0ecfc-4266-4a61-b0a5-672bf28a8700:2 53c3911f-8fc9-4eff-b826-8e508e3c0667:3 9d39c2e6-ab20-414b-b04e-cc614c4d9d20:4 061e4d82-1975-4561-9332-81e8c1e8015a:5 33a1cc41-5d4d-47dd-93f1-5b08faa69e38:6 9da9ba42-15bb-4a6c-bc63-648762e3c677:7 f7889af8-4339-4b5b-9c54-c2a01baf4b74:8 6e693182-be7e-4a6e-8ee9-3c12bfb510ef:9 ' 1:5 256 -d 00:11:49.691 05:02:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 
00:11:49.691 05:02:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=6 00:11:49.691 05:02:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 6 ANY 10.0.0.2/32 00:11:49.950 05:02:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 4 -eq 1 ']' 00:11:49.950 05:02:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:11:50.208 05:02:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc4 00:11:50.208 05:02:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc4 lvs_4 -c 1048576 00:11:50.466 05:02:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=b6c92fc5-d5c5-4a6d-8614-561dc1c94e07 00:11:50.466 05:02:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:11:50.466 05:02:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:11:50.466 05:02:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:50.466 05:02:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b6c92fc5-d5c5-4a6d-8614-561dc1c94e07 lbd_1 10 00:11:50.724 05:02:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=103b331e-5014-4d37-9421-5a8e5b2dd931 00:11:50.724 05:02:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='103b331e-5014-4d37-9421-5a8e5b2dd931:0 ' 00:11:50.724 05:02:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:50.724 05:02:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b6c92fc5-d5c5-4a6d-8614-561dc1c94e07 lbd_2 10 00:11:50.982 05:02:50 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=0cc0867f-d05f-4507-b2e2-cf0ab928f291 00:11:50.982 05:02:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='0cc0867f-d05f-4507-b2e2-cf0ab928f291:1 ' 00:11:50.982 05:02:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:50.982 05:02:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b6c92fc5-d5c5-4a6d-8614-561dc1c94e07 lbd_3 10 00:11:51.243 05:02:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=783a7399-3da3-4a87-b16d-caba3c0161cf 00:11:51.243 05:02:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='783a7399-3da3-4a87-b16d-caba3c0161cf:2 ' 00:11:51.243 05:02:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:51.243 05:02:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b6c92fc5-d5c5-4a6d-8614-561dc1c94e07 lbd_4 10 00:11:51.243 05:02:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=519b5d72-353e-42f9-b918-97e8ce9ceeb7 00:11:51.243 05:02:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='519b5d72-353e-42f9-b918-97e8ce9ceeb7:3 ' 00:11:51.243 05:02:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:51.243 05:02:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b6c92fc5-d5c5-4a6d-8614-561dc1c94e07 lbd_5 10 00:11:51.501 05:02:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=efc73287-59e4-46db-becc-4aceb42be486 00:11:51.501 05:02:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='efc73287-59e4-46db-becc-4aceb42be486:4 ' 00:11:51.501 05:02:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:51.501 05:02:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b6c92fc5-d5c5-4a6d-8614-561dc1c94e07 lbd_6 10 00:11:51.759 05:02:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=2109377a-c41f-4e16-b1bd-c8000fdb35a1 00:11:51.759 05:02:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='2109377a-c41f-4e16-b1bd-c8000fdb35a1:5 ' 00:11:51.759 05:02:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:51.759 05:02:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b6c92fc5-d5c5-4a6d-8614-561dc1c94e07 lbd_7 10 00:11:52.017 05:02:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=de673584-9eb3-4dc0-a6bd-57647b7548cd 00:11:52.017 05:02:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='de673584-9eb3-4dc0-a6bd-57647b7548cd:6 ' 00:11:52.017 05:02:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:52.017 05:02:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b6c92fc5-d5c5-4a6d-8614-561dc1c94e07 lbd_8 10 00:11:52.275 05:02:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=b4be23e2-ed5f-4bf5-8dc0-5aa5abeae430 00:11:52.275 05:02:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='b4be23e2-ed5f-4bf5-8dc0-5aa5abeae430:7 ' 00:11:52.275 05:02:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:52.275 05:02:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b6c92fc5-d5c5-4a6d-8614-561dc1c94e07 lbd_9 10 00:11:52.534 05:02:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@61 -- # lb_name=2c9e5fe5-dfdc-4686-9a4f-ccc89e873812 00:11:52.534 05:02:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='2c9e5fe5-dfdc-4686-9a4f-ccc89e873812:8 ' 00:11:52.534 05:02:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:52.534 05:02:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b6c92fc5-d5c5-4a6d-8614-561dc1c94e07 lbd_10 10 00:11:52.793 05:02:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=2137f123-4db2-416d-837c-d148ecb28162 00:11:52.793 05:02:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='2137f123-4db2-416d-837c-d148ecb28162:9 ' 00:11:52.794 05:02:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target4 Target4_alias '103b331e-5014-4d37-9421-5a8e5b2dd931:0 0cc0867f-d05f-4507-b2e2-cf0ab928f291:1 783a7399-3da3-4a87-b16d-caba3c0161cf:2 519b5d72-353e-42f9-b918-97e8ce9ceeb7:3 efc73287-59e4-46db-becc-4aceb42be486:4 2109377a-c41f-4e16-b1bd-c8000fdb35a1:5 de673584-9eb3-4dc0-a6bd-57647b7548cd:6 b4be23e2-ed5f-4bf5-8dc0-5aa5abeae430:7 2c9e5fe5-dfdc-4686-9a4f-ccc89e873812:8 2137f123-4db2-416d-837c-d148ecb28162:9 ' 1:6 256 -d 00:11:53.052 05:02:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:11:53.052 05:02:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=7 00:11:53.052 05:02:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 7 ANY 10.0.0.2/32 00:11:53.310 05:02:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 5 -eq 1 ']' 00:11:53.310 05:02:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:11:53.569 
05:02:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc5 00:11:53.569 05:02:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc5 lvs_5 -c 1048576 00:11:53.828 05:02:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=4735da02-962f-404d-92c8-e12866cc3843 00:11:53.828 05:02:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:11:53.828 05:02:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:11:53.828 05:02:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:53.828 05:02:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4735da02-962f-404d-92c8-e12866cc3843 lbd_1 10 00:11:54.086 05:02:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=5b4c1ab9-41c2-4860-8949-d77a86ff0b29 00:11:54.086 05:02:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='5b4c1ab9-41c2-4860-8949-d77a86ff0b29:0 ' 00:11:54.086 05:02:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:54.086 05:02:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4735da02-962f-404d-92c8-e12866cc3843 lbd_2 10 00:11:54.344 05:02:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=c01c237d-48ed-4ca6-bb1d-233b36c36ab8 00:11:54.344 05:02:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='c01c237d-48ed-4ca6-bb1d-233b36c36ab8:1 ' 00:11:54.344 05:02:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:54.344 05:02:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4735da02-962f-404d-92c8-e12866cc3843 lbd_3 10 
00:11:54.612 05:02:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=ca1cdeb9-5ad7-4ee3-9b30-19f510896ee9 00:11:54.612 05:02:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='ca1cdeb9-5ad7-4ee3-9b30-19f510896ee9:2 ' 00:11:54.612 05:02:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:54.612 05:02:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4735da02-962f-404d-92c8-e12866cc3843 lbd_4 10 00:11:54.887 05:02:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=b626dca1-c43a-468f-829d-4be7e958ef04 00:11:54.887 05:02:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='b626dca1-c43a-468f-829d-4be7e958ef04:3 ' 00:11:54.887 05:02:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:54.887 05:02:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4735da02-962f-404d-92c8-e12866cc3843 lbd_5 10 00:11:55.146 05:02:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=92d3b67c-c643-4d02-b0b3-eab4c738a85e 00:11:55.146 05:02:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='92d3b67c-c643-4d02-b0b3-eab4c738a85e:4 ' 00:11:55.146 05:02:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:55.146 05:02:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4735da02-962f-404d-92c8-e12866cc3843 lbd_6 10 00:11:55.405 05:02:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=49bfcf35-44e6-49f4-81d2-533472246e63 00:11:55.405 05:02:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='49bfcf35-44e6-49f4-81d2-533472246e63:5 ' 00:11:55.405 05:02:55 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:55.405 05:02:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4735da02-962f-404d-92c8-e12866cc3843 lbd_7 10 00:11:55.663 05:02:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=196128f9-0553-4e65-88d0-b9c9595e286f 00:11:55.663 05:02:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='196128f9-0553-4e65-88d0-b9c9595e286f:6 ' 00:11:55.663 05:02:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:55.663 05:02:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4735da02-962f-404d-92c8-e12866cc3843 lbd_8 10 00:11:55.921 05:02:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=07430f70-c85c-4232-8d81-20f9bc306203 00:11:55.921 05:02:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='07430f70-c85c-4232-8d81-20f9bc306203:7 ' 00:11:55.921 05:02:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:55.921 05:02:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4735da02-962f-404d-92c8-e12866cc3843 lbd_9 10 00:11:56.180 05:02:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=32e0ad31-e448-4141-9b35-c08deedc08e6 00:11:56.180 05:02:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='32e0ad31-e448-4141-9b35-c08deedc08e6:8 ' 00:11:56.180 05:02:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:56.180 05:02:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4735da02-962f-404d-92c8-e12866cc3843 lbd_10 10 00:11:56.437 05:02:56 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=2a77fee3-6e61-4d98-bad9-a42be6aacb2c 00:11:56.437 05:02:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='2a77fee3-6e61-4d98-bad9-a42be6aacb2c:9 ' 00:11:56.437 05:02:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target5 Target5_alias '5b4c1ab9-41c2-4860-8949-d77a86ff0b29:0 c01c237d-48ed-4ca6-bb1d-233b36c36ab8:1 ca1cdeb9-5ad7-4ee3-9b30-19f510896ee9:2 b626dca1-c43a-468f-829d-4be7e958ef04:3 92d3b67c-c643-4d02-b0b3-eab4c738a85e:4 49bfcf35-44e6-49f4-81d2-533472246e63:5 196128f9-0553-4e65-88d0-b9c9595e286f:6 07430f70-c85c-4232-8d81-20f9bc306203:7 32e0ad31-e448-4141-9b35-c08deedc08e6:8 2a77fee3-6e61-4d98-bad9-a42be6aacb2c:9 ' 1:7 256 -d 00:11:56.694 05:02:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:11:56.694 05:02:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=8 00:11:56.694 05:02:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 8 ANY 10.0.0.2/32 00:11:56.952 05:02:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 6 -eq 1 ']' 00:11:56.952 05:02:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:11:57.210 05:02:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc6 00:11:57.210 05:02:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc6 lvs_6 -c 1048576 00:11:57.468 05:02:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=ceb3aee0-11ef-4717-a337-fee326499e61 00:11:57.468 05:02:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:11:57.468 05:02:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:11:57.468 05:02:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:57.468 05:02:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ceb3aee0-11ef-4717-a337-fee326499e61 lbd_1 10 00:11:57.725 05:02:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=3c82716e-fc63-4604-8bd6-afcc14c9b4b8 00:11:57.725 05:02:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='3c82716e-fc63-4604-8bd6-afcc14c9b4b8:0 ' 00:11:57.725 05:02:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:57.725 05:02:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ceb3aee0-11ef-4717-a337-fee326499e61 lbd_2 10 00:11:57.983 05:02:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=956f593f-191b-475f-a3b8-1f496c0e7d17 00:11:57.983 05:02:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='956f593f-191b-475f-a3b8-1f496c0e7d17:1 ' 00:11:57.983 05:02:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:57.983 05:02:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ceb3aee0-11ef-4717-a337-fee326499e61 lbd_3 10 00:11:58.253 05:02:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=267cf22b-c42b-48f7-a28e-6f2201c44104 00:11:58.253 05:02:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='267cf22b-c42b-48f7-a28e-6f2201c44104:2 ' 00:11:58.253 05:02:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:58.253 05:02:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
ceb3aee0-11ef-4717-a337-fee326499e61 lbd_4 10 00:11:58.528 05:02:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=c954f3e7-320e-4a7c-86ab-311a7c595932 00:11:58.529 05:02:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='c954f3e7-320e-4a7c-86ab-311a7c595932:3 ' 00:11:58.529 05:02:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:58.529 05:02:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ceb3aee0-11ef-4717-a337-fee326499e61 lbd_5 10 00:11:59.095 05:02:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=1862109a-911d-4274-b6ec-a90b1e4013d9 00:11:59.095 05:02:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='1862109a-911d-4274-b6ec-a90b1e4013d9:4 ' 00:11:59.095 05:02:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:59.095 05:02:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ceb3aee0-11ef-4717-a337-fee326499e61 lbd_6 10 00:11:59.095 05:02:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=3d1c39f2-e34e-44df-bb5d-03a560d40e3c 00:11:59.095 05:02:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='3d1c39f2-e34e-44df-bb5d-03a560d40e3c:5 ' 00:11:59.095 05:02:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:59.095 05:02:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ceb3aee0-11ef-4717-a337-fee326499e61 lbd_7 10 00:11:59.353 05:02:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=4be0b968-2d59-4c63-8f11-a9bd0d1a3896 00:11:59.353 05:02:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='4be0b968-2d59-4c63-8f11-a9bd0d1a3896:6 ' 
00:11:59.353 05:02:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:59.353 05:02:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ceb3aee0-11ef-4717-a337-fee326499e61 lbd_8 10 00:11:59.611 05:02:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=62b4bad2-9da1-4bf4-9631-b3004af36379 00:11:59.611 05:02:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='62b4bad2-9da1-4bf4-9631-b3004af36379:7 ' 00:11:59.611 05:02:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:59.611 05:02:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ceb3aee0-11ef-4717-a337-fee326499e61 lbd_9 10 00:11:59.869 05:02:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=40809d1a-844a-4a3c-a2f4-62efb1f329d4 00:11:59.869 05:02:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='40809d1a-844a-4a3c-a2f4-62efb1f329d4:8 ' 00:11:59.869 05:02:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:59.869 05:02:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ceb3aee0-11ef-4717-a337-fee326499e61 lbd_10 10 00:12:00.127 05:03:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=f2dcb03a-30b6-46bc-a5e3-b7ab561487cd 00:12:00.127 05:03:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='f2dcb03a-30b6-46bc-a5e3-b7ab561487cd:9 ' 00:12:00.127 05:03:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target6 Target6_alias '3c82716e-fc63-4604-8bd6-afcc14c9b4b8:0 956f593f-191b-475f-a3b8-1f496c0e7d17:1 267cf22b-c42b-48f7-a28e-6f2201c44104:2 
c954f3e7-320e-4a7c-86ab-311a7c595932:3 1862109a-911d-4274-b6ec-a90b1e4013d9:4 3d1c39f2-e34e-44df-bb5d-03a560d40e3c:5 4be0b968-2d59-4c63-8f11-a9bd0d1a3896:6 62b4bad2-9da1-4bf4-9631-b3004af36379:7 40809d1a-844a-4a3c-a2f4-62efb1f329d4:8 f2dcb03a-30b6-46bc-a5e3-b7ab561487cd:9 ' 1:8 256 -d 00:12:00.384 05:03:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:12:00.384 05:03:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=9 00:12:00.384 05:03:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 9 ANY 10.0.0.2/32 00:12:00.641 05:03:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 7 -eq 1 ']' 00:12:00.641 05:03:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:12:00.899 05:03:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc7 00:12:00.899 05:03:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc7 lvs_7 -c 1048576 00:12:01.157 05:03:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=7260844a-3d7a-4214-bd11-f9ce3324059b 00:12:01.157 05:03:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:12:01.157 05:03:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:12:01.157 05:03:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:01.157 05:03:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7260844a-3d7a-4214-bd11-f9ce3324059b lbd_1 10 00:12:01.415 05:03:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=e828eae4-f4d3-4984-af89-63f8091671ae 00:12:01.415 05:03:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@62 -- # LUNs+='e828eae4-f4d3-4984-af89-63f8091671ae:0 ' 00:12:01.415 05:03:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:01.415 05:03:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7260844a-3d7a-4214-bd11-f9ce3324059b lbd_2 10 00:12:01.674 05:03:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=dec39c3c-6207-4ad4-9ff6-a55d78816be8 00:12:01.674 05:03:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='dec39c3c-6207-4ad4-9ff6-a55d78816be8:1 ' 00:12:01.674 05:03:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:01.674 05:03:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7260844a-3d7a-4214-bd11-f9ce3324059b lbd_3 10 00:12:01.932 05:03:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=a4410611-fad1-423e-91b0-9126275578f5 00:12:01.932 05:03:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='a4410611-fad1-423e-91b0-9126275578f5:2 ' 00:12:01.932 05:03:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:01.932 05:03:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7260844a-3d7a-4214-bd11-f9ce3324059b lbd_4 10 00:12:02.213 05:03:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=65981c96-cae9-4067-b726-29f5c34d0fed 00:12:02.213 05:03:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='65981c96-cae9-4067-b726-29f5c34d0fed:3 ' 00:12:02.213 05:03:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:02.214 05:03:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7260844a-3d7a-4214-bd11-f9ce3324059b lbd_5 10 00:12:02.472 05:03:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=e15fd2ab-4067-49f5-9256-333cdb726353 00:12:02.472 05:03:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='e15fd2ab-4067-49f5-9256-333cdb726353:4 ' 00:12:02.472 05:03:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:02.472 05:03:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7260844a-3d7a-4214-bd11-f9ce3324059b lbd_6 10 00:12:02.472 05:03:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=f7178711-8b02-4294-bf65-99ee01dd7b10 00:12:02.472 05:03:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='f7178711-8b02-4294-bf65-99ee01dd7b10:5 ' 00:12:02.472 05:03:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:02.472 05:03:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7260844a-3d7a-4214-bd11-f9ce3324059b lbd_7 10 00:12:03.040 05:03:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=b6cc0372-9728-4434-85b1-2b6bfc08fdcf 00:12:03.040 05:03:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='b6cc0372-9728-4434-85b1-2b6bfc08fdcf:6 ' 00:12:03.040 05:03:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:03.040 05:03:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7260844a-3d7a-4214-bd11-f9ce3324059b lbd_8 10 00:12:03.040 05:03:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=b6e19db2-c897-4b6c-95d3-81e4043eb577 00:12:03.040 05:03:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@62 -- # LUNs+='b6e19db2-c897-4b6c-95d3-81e4043eb577:7 ' 00:12:03.040 05:03:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:03.040 05:03:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7260844a-3d7a-4214-bd11-f9ce3324059b lbd_9 10 00:12:03.298 05:03:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=91a6008d-1901-412b-9c76-025f3513b240 00:12:03.298 05:03:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='91a6008d-1901-412b-9c76-025f3513b240:8 ' 00:12:03.298 05:03:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:03.298 05:03:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7260844a-3d7a-4214-bd11-f9ce3324059b lbd_10 10 00:12:03.863 05:03:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=f5a1f4b4-008a-4f51-ab1e-6319b1970ae7 00:12:03.863 05:03:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='f5a1f4b4-008a-4f51-ab1e-6319b1970ae7:9 ' 00:12:03.863 05:03:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target7 Target7_alias 'e828eae4-f4d3-4984-af89-63f8091671ae:0 dec39c3c-6207-4ad4-9ff6-a55d78816be8:1 a4410611-fad1-423e-91b0-9126275578f5:2 65981c96-cae9-4067-b726-29f5c34d0fed:3 e15fd2ab-4067-49f5-9256-333cdb726353:4 f7178711-8b02-4294-bf65-99ee01dd7b10:5 b6cc0372-9728-4434-85b1-2b6bfc08fdcf:6 b6e19db2-c897-4b6c-95d3-81e4043eb577:7 91a6008d-1901-412b-9c76-025f3513b240:8 f5a1f4b4-008a-4f51-ab1e-6319b1970ae7:9 ' 1:9 256 -d 00:12:03.863 05:03:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:12:03.863 05:03:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=10 
00:12:03.863 05:03:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 10 ANY 10.0.0.2/32 00:12:04.121 05:03:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 8 -eq 1 ']' 00:12:04.121 05:03:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:12:04.380 05:03:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc8 00:12:04.380 05:03:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc8 lvs_8 -c 1048576 00:12:04.639 05:03:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=9dd2d220-6523-4d1b-8b08-6c243a393237 00:12:04.639 05:03:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:12:04.639 05:03:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:12:04.639 05:03:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:04.639 05:03:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9dd2d220-6523-4d1b-8b08-6c243a393237 lbd_1 10 00:12:04.898 05:03:05 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=cd7f5622-3b1d-4746-bbd6-fea213b0320f 00:12:04.898 05:03:05 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='cd7f5622-3b1d-4746-bbd6-fea213b0320f:0 ' 00:12:04.898 05:03:05 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:04.898 05:03:05 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9dd2d220-6523-4d1b-8b08-6c243a393237 lbd_2 10 00:12:05.156 05:03:05 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # 
lb_name=35ed42aa-c568-4ee5-a263-eb6a964c7ab8 00:12:05.156 05:03:05 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='35ed42aa-c568-4ee5-a263-eb6a964c7ab8:1 ' 00:12:05.156 05:03:05 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:05.156 05:03:05 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9dd2d220-6523-4d1b-8b08-6c243a393237 lbd_3 10 00:12:05.415 05:03:05 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=5af4a612-57bd-4895-b1d9-8537e5db9295 00:12:05.415 05:03:05 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='5af4a612-57bd-4895-b1d9-8537e5db9295:2 ' 00:12:05.415 05:03:05 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:05.415 05:03:05 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9dd2d220-6523-4d1b-8b08-6c243a393237 lbd_4 10 00:12:05.674 05:03:05 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=bbfcc7c7-0479-4a04-8be7-3172c101c59d 00:12:05.674 05:03:05 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='bbfcc7c7-0479-4a04-8be7-3172c101c59d:3 ' 00:12:05.674 05:03:05 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:05.674 05:03:05 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9dd2d220-6523-4d1b-8b08-6c243a393237 lbd_5 10 00:12:05.932 05:03:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=87d84b12-5830-4613-8776-2a1ea8cc7e29 00:12:05.932 05:03:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='87d84b12-5830-4613-8776-2a1ea8cc7e29:4 ' 00:12:05.932 05:03:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:05.932 05:03:06 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9dd2d220-6523-4d1b-8b08-6c243a393237 lbd_6 10 00:12:06.191 05:03:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=f71a7fbe-461d-420e-b194-3687cf0350ef 00:12:06.191 05:03:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='f71a7fbe-461d-420e-b194-3687cf0350ef:5 ' 00:12:06.191 05:03:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:06.191 05:03:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9dd2d220-6523-4d1b-8b08-6c243a393237 lbd_7 10 00:12:06.449 05:03:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=07bfbee3-2df5-4fc8-b28e-f01b4690c0a2 00:12:06.449 05:03:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='07bfbee3-2df5-4fc8-b28e-f01b4690c0a2:6 ' 00:12:06.449 05:03:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:06.449 05:03:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9dd2d220-6523-4d1b-8b08-6c243a393237 lbd_8 10 00:12:06.709 05:03:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=ac9304a5-5091-4f06-b2c7-3ff5139cf8d9 00:12:06.710 05:03:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='ac9304a5-5091-4f06-b2c7-3ff5139cf8d9:7 ' 00:12:06.710 05:03:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:06.710 05:03:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9dd2d220-6523-4d1b-8b08-6c243a393237 lbd_9 10 00:12:06.986 05:03:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=efaabad6-f69e-4373-95c5-1b78c067fc88 
00:12:06.986 05:03:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='efaabad6-f69e-4373-95c5-1b78c067fc88:8 ' 00:12:06.986 05:03:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:06.986 05:03:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9dd2d220-6523-4d1b-8b08-6c243a393237 lbd_10 10 00:12:07.244 05:03:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=c3ffe69c-2d36-4c02-8dec-69ec7659d5f8 00:12:07.244 05:03:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='c3ffe69c-2d36-4c02-8dec-69ec7659d5f8:9 ' 00:12:07.244 05:03:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target8 Target8_alias 'cd7f5622-3b1d-4746-bbd6-fea213b0320f:0 35ed42aa-c568-4ee5-a263-eb6a964c7ab8:1 5af4a612-57bd-4895-b1d9-8537e5db9295:2 bbfcc7c7-0479-4a04-8be7-3172c101c59d:3 87d84b12-5830-4613-8776-2a1ea8cc7e29:4 f71a7fbe-461d-420e-b194-3687cf0350ef:5 07bfbee3-2df5-4fc8-b28e-f01b4690c0a2:6 ac9304a5-5091-4f06-b2c7-3ff5139cf8d9:7 efaabad6-f69e-4373-95c5-1b78c067fc88:8 c3ffe69c-2d36-4c02-8dec-69ec7659d5f8:9 ' 1:10 256 -d 00:12:07.501 05:03:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:12:07.501 05:03:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=11 00:12:07.501 05:03:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 11 ANY 10.0.0.2/32 00:12:07.759 05:03:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 9 -eq 1 ']' 00:12:07.759 05:03:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:12:08.327 05:03:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # 
bdev=Malloc9 00:12:08.327 05:03:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc9 lvs_9 -c 1048576 00:12:08.327 05:03:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=def39b3d-9d61-4e22-b1ce-90210ec60774 00:12:08.327 05:03:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:12:08.327 05:03:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:12:08.327 05:03:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:08.327 05:03:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u def39b3d-9d61-4e22-b1ce-90210ec60774 lbd_1 10 00:12:08.893 05:03:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=72d50add-3463-4764-809d-227d08fb95c6 00:12:08.893 05:03:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='72d50add-3463-4764-809d-227d08fb95c6:0 ' 00:12:08.893 05:03:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:08.893 05:03:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u def39b3d-9d61-4e22-b1ce-90210ec60774 lbd_2 10 00:12:08.893 05:03:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=46eca01d-2772-4535-bf3a-dcfd5d390700 00:12:08.893 05:03:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='46eca01d-2772-4535-bf3a-dcfd5d390700:1 ' 00:12:08.893 05:03:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:08.893 05:03:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u def39b3d-9d61-4e22-b1ce-90210ec60774 lbd_3 10 00:12:09.152 05:03:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@61 -- # lb_name=30492690-419b-4a92-9a45-ca5db92f1714 00:12:09.152 05:03:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='30492690-419b-4a92-9a45-ca5db92f1714:2 ' 00:12:09.152 05:03:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:09.152 05:03:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u def39b3d-9d61-4e22-b1ce-90210ec60774 lbd_4 10 00:12:09.410 05:03:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=3990a5b9-3fd3-44e7-b4af-30e048a24e21 00:12:09.410 05:03:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='3990a5b9-3fd3-44e7-b4af-30e048a24e21:3 ' 00:12:09.410 05:03:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:09.410 05:03:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u def39b3d-9d61-4e22-b1ce-90210ec60774 lbd_5 10 00:12:09.668 05:03:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=8acc9aa4-9a17-435b-97cb-0a8e7c744784 00:12:09.668 05:03:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='8acc9aa4-9a17-435b-97cb-0a8e7c744784:4 ' 00:12:09.668 05:03:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:09.668 05:03:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u def39b3d-9d61-4e22-b1ce-90210ec60774 lbd_6 10 00:12:09.926 05:03:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=afbc8eb3-c608-4b2a-b2d1-e3b5d82ccd3c 00:12:09.926 05:03:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='afbc8eb3-c608-4b2a-b2d1-e3b5d82ccd3c:5 ' 00:12:09.926 05:03:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 
$NUM_LVOL) 00:12:09.926 05:03:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u def39b3d-9d61-4e22-b1ce-90210ec60774 lbd_7 10 00:12:10.184 05:03:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=30d1f3c8-bc2a-46dc-81a4-e488c1f2c36e 00:12:10.184 05:03:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='30d1f3c8-bc2a-46dc-81a4-e488c1f2c36e:6 ' 00:12:10.184 05:03:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:10.184 05:03:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u def39b3d-9d61-4e22-b1ce-90210ec60774 lbd_8 10 00:12:10.443 05:03:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=4b2b0e3c-e3ec-4c31-8bcd-9f9fc0390574 00:12:10.443 05:03:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='4b2b0e3c-e3ec-4c31-8bcd-9f9fc0390574:7 ' 00:12:10.443 05:03:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:10.443 05:03:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u def39b3d-9d61-4e22-b1ce-90210ec60774 lbd_9 10 00:12:10.702 05:03:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=e56f453c-7124-4138-8419-ecc5c027167f 00:12:10.702 05:03:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='e56f453c-7124-4138-8419-ecc5c027167f:8 ' 00:12:10.702 05:03:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:10.702 05:03:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u def39b3d-9d61-4e22-b1ce-90210ec60774 lbd_10 10 00:12:10.959 05:03:11 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # 
lb_name=6ae35f64-ef67-4180-8170-d753ad15ad7f 00:12:10.959 05:03:11 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='6ae35f64-ef67-4180-8170-d753ad15ad7f:9 ' 00:12:10.959 05:03:11 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target9 Target9_alias '72d50add-3463-4764-809d-227d08fb95c6:0 46eca01d-2772-4535-bf3a-dcfd5d390700:1 30492690-419b-4a92-9a45-ca5db92f1714:2 3990a5b9-3fd3-44e7-b4af-30e048a24e21:3 8acc9aa4-9a17-435b-97cb-0a8e7c744784:4 afbc8eb3-c608-4b2a-b2d1-e3b5d82ccd3c:5 30d1f3c8-bc2a-46dc-81a4-e488c1f2c36e:6 4b2b0e3c-e3ec-4c31-8bcd-9f9fc0390574:7 e56f453c-7124-4138-8419-ecc5c027167f:8 6ae35f64-ef67-4180-8170-d753ad15ad7f:9 ' 1:11 256 -d 00:12:11.217 05:03:11 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:12:11.217 05:03:11 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=12 00:12:11.217 05:03:11 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 12 ANY 10.0.0.2/32 00:12:11.474 05:03:11 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 10 -eq 1 ']' 00:12:11.474 05:03:11 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:12:11.733 05:03:11 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc10 00:12:11.733 05:03:11 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc10 lvs_10 -c 1048576 00:12:11.991 05:03:12 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=5bb13c9d-4cb6-4333-89c0-dd53190f9b3e 00:12:11.991 05:03:12 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:12:11.991 05:03:12 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:12:11.991 
05:03:12 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:11.991 05:03:12 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5bb13c9d-4cb6-4333-89c0-dd53190f9b3e lbd_1 10 00:12:12.249 05:03:12 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=dbeb6404-c24f-4a82-9046-d55c813cde2e 00:12:12.249 05:03:12 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='dbeb6404-c24f-4a82-9046-d55c813cde2e:0 ' 00:12:12.249 05:03:12 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:12.249 05:03:12 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5bb13c9d-4cb6-4333-89c0-dd53190f9b3e lbd_2 10 00:12:12.508 05:03:12 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=f59ee2fe-5e4d-4a48-8da1-6d1f6af5c1e0 00:12:12.508 05:03:12 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='f59ee2fe-5e4d-4a48-8da1-6d1f6af5c1e0:1 ' 00:12:12.508 05:03:12 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:12.508 05:03:12 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5bb13c9d-4cb6-4333-89c0-dd53190f9b3e lbd_3 10 00:12:12.766 05:03:12 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=ea9cbb06-37cb-489a-aad2-af08c767f2cf 00:12:12.767 05:03:12 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='ea9cbb06-37cb-489a-aad2-af08c767f2cf:2 ' 00:12:12.767 05:03:12 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:12.767 05:03:12 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5bb13c9d-4cb6-4333-89c0-dd53190f9b3e lbd_4 10 00:12:13.026 
05:03:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=5591d424-bde7-4f0d-a1ad-b612a636a0b0 00:12:13.026 05:03:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='5591d424-bde7-4f0d-a1ad-b612a636a0b0:3 ' 00:12:13.026 05:03:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:13.026 05:03:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5bb13c9d-4cb6-4333-89c0-dd53190f9b3e lbd_5 10 00:12:13.284 05:03:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=343dc214-fea3-4e0c-bb97-54748851e7ec 00:12:13.284 05:03:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='343dc214-fea3-4e0c-bb97-54748851e7ec:4 ' 00:12:13.284 05:03:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:13.284 05:03:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5bb13c9d-4cb6-4333-89c0-dd53190f9b3e lbd_6 10 00:12:13.541 05:03:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=afb07fa0-32d9-479b-b58c-d258fef8f7ef 00:12:13.541 05:03:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='afb07fa0-32d9-479b-b58c-d258fef8f7ef:5 ' 00:12:13.541 05:03:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:13.541 05:03:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5bb13c9d-4cb6-4333-89c0-dd53190f9b3e lbd_7 10 00:12:13.799 05:03:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=64c793ca-8b20-4d14-81c9-fa56f3c6a391 00:12:13.799 05:03:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='64c793ca-8b20-4d14-81c9-fa56f3c6a391:6 ' 00:12:13.799 05:03:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:13.799 05:03:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5bb13c9d-4cb6-4333-89c0-dd53190f9b3e lbd_8 10 00:12:14.058 05:03:14 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=927a86ec-672f-4dd1-ba93-bdb391b1b296 00:12:14.058 05:03:14 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='927a86ec-672f-4dd1-ba93-bdb391b1b296:7 ' 00:12:14.058 05:03:14 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:14.058 05:03:14 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5bb13c9d-4cb6-4333-89c0-dd53190f9b3e lbd_9 10 00:12:14.316 05:03:14 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=8c245e6f-cc35-412f-b2ac-4b660742e671 00:12:14.316 05:03:14 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='8c245e6f-cc35-412f-b2ac-4b660742e671:8 ' 00:12:14.316 05:03:14 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:14.316 05:03:14 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5bb13c9d-4cb6-4333-89c0-dd53190f9b3e lbd_10 10 00:12:14.575 05:03:14 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=7bc9abdd-4fa3-40f1-9422-f5c810378d47 00:12:14.575 05:03:14 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='7bc9abdd-4fa3-40f1-9422-f5c810378d47:9 ' 00:12:14.575 05:03:14 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target10 Target10_alias 'dbeb6404-c24f-4a82-9046-d55c813cde2e:0 f59ee2fe-5e4d-4a48-8da1-6d1f6af5c1e0:1 ea9cbb06-37cb-489a-aad2-af08c767f2cf:2 5591d424-bde7-4f0d-a1ad-b612a636a0b0:3 
343dc214-fea3-4e0c-bb97-54748851e7ec:4 afb07fa0-32d9-479b-b58c-d258fef8f7ef:5 64c793ca-8b20-4d14-81c9-fa56f3c6a391:6 927a86ec-672f-4dd1-ba93-bdb391b1b296:7 8c245e6f-cc35-412f-b2ac-4b660742e671:8 7bc9abdd-4fa3-40f1-9422-f5c810378d47:9 ' 1:12 256 -d 00:12:14.833 05:03:14 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@66 -- # timing_exit setup 00:12:14.833 05:03:14 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:14.833 05:03:14 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:14.833 05:03:14 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@68 -- # sleep 1 00:12:15.767 05:03:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@70 -- # timing_enter discovery 00:12:15.767 05:03:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:15.767 05:03:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:15.767 05:03:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@71 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:12:15.767 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:12:15.767 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target2 00:12:15.767 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:12:15.767 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target4 00:12:15.767 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target5 00:12:15.767 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target6 00:12:15.767 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target7 00:12:15.767 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target8 00:12:15.767 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target9 00:12:15.767 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target10 00:12:15.767 05:03:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@72 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:12:15.767 [2024-07-23 05:03:15.972573] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.025 [2024-07-23 05:03:15.990577] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: 
unsupported INQUIRY VPD page 0xb9 00:12:16.025 [2024-07-23 05:03:15.994429] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.025 [2024-07-23 05:03:16.027729] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.025 [2024-07-23 05:03:16.032404] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.025 [2024-07-23 05:03:16.064026] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.025 [2024-07-23 05:03:16.091870] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.025 [2024-07-23 05:03:16.092377] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.025 [2024-07-23 05:03:16.094431] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.025 [2024-07-23 05:03:16.119529] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.025 [2024-07-23 05:03:16.134005] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.025 [2024-07-23 05:03:16.153585] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.025 [2024-07-23 05:03:16.164147] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.025 [2024-07-23 05:03:16.168638] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.025 [2024-07-23 05:03:16.169817] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.025 [2024-07-23 05:03:16.202432] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.025 [2024-07-23 05:03:16.207586] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.025 [2024-07-23 05:03:16.236302] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.283 
[2024-07-23 05:03:16.246366] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.283 [2024-07-23 05:03:16.250363] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.283 [2024-07-23 05:03:16.262603] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.283 [2024-07-23 05:03:16.282755] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.283 [2024-07-23 05:03:16.287870] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.283 [2024-07-23 05:03:16.296088] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.283 [2024-07-23 05:03:16.324595] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.283 [2024-07-23 05:03:16.336582] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.283 [2024-07-23 05:03:16.362224] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.283 [2024-07-23 05:03:16.376434] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.283 [2024-07-23 05:03:16.383328] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.283 [2024-07-23 05:03:16.416899] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.283 [2024-07-23 05:03:16.420877] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.283 [2024-07-23 05:03:16.426275] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.283 [2024-07-23 05:03:16.433898] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.283 [2024-07-23 05:03:16.467828] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.283 [2024-07-23 05:03:16.472043] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.283 [2024-07-23 05:03:16.476988] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.541 [2024-07-23 05:03:16.573288] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.541 [2024-07-23 05:03:16.605900] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.541 [2024-07-23 05:03:16.716731] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.799 [2024-07-23 05:03:16.784209] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.799 [2024-07-23 05:03:16.799128] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.799 [2024-07-23 05:03:16.839516] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.799 [2024-07-23 05:03:16.841084] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.799 [2024-07-23 05:03:16.842705] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.799 [2024-07-23 05:03:16.898785] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.799 [2024-07-23 05:03:16.907074] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.799 [2024-07-23 05:03:16.934799] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.799 [2024-07-23 05:03:16.936244] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.799 [2024-07-23 05:03:16.938869] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.799 [2024-07-23 05:03:16.943950] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.799 [2024-07-23 05:03:16.975944] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:12:16.799 [2024-07-23 05:03:16.982433] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:16.799 [2024-07-23 05:03:16.994422] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.057 [2024-07-23 05:03:17.028411] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.057 [2024-07-23 05:03:17.032546] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.057 [2024-07-23 05:03:17.070367] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.057 [2024-07-23 05:03:17.118201] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.057 [2024-07-23 05:03:17.123363] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.057 [2024-07-23 05:03:17.148487] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.057 [2024-07-23 05:03:17.262746] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.315 [2024-07-23 05:03:17.313835] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.315 [2024-07-23 05:03:17.335170] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.315 [2024-07-23 05:03:17.352622] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.315 [2024-07-23 05:03:17.354137] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.315 [2024-07-23 05:03:17.375201] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.315 [2024-07-23 05:03:17.376243] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.315 [2024-07-23 05:03:17.420368] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.315 [2024-07-23 
05:03:17.431776] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.315 [2024-07-23 05:03:17.432176] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.315 [2024-07-23 05:03:17.448801] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.315 [2024-07-23 05:03:17.474864] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.315 [2024-07-23 05:03:17.475839] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.315 [2024-07-23 05:03:17.496548] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.315 [2024-07-23 05:03:17.508897] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.315 [2024-07-23 05:03:17.522728] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.315 [2024-07-23 05:03:17.522825] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.573 [2024-07-23 05:03:17.533967] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.573 [2024-07-23 05:03:17.557704] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.573 [2024-07-23 05:03:17.565675] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.573 [2024-07-23 05:03:17.576780] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.573 [2024-07-23 05:03:17.589583] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.573 [2024-07-23 05:03:17.589604] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.573 [2024-07-23 05:03:17.603590] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.573 [2024-07-23 05:03:17.644703] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.573 [2024-07-23 05:03:17.653590] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.573 [2024-07-23 05:03:17.664324] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.573 [2024-07-23 05:03:17.664742] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.573 [2024-07-23 05:03:17.668056] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.573 [2024-07-23 05:03:17.693465] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.573 [2024-07-23 05:03:17.695388] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.573 [2024-07-23 05:03:17.730599] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.573 [2024-07-23 05:03:17.733651] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.573 [2024-07-23 05:03:17.751583] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.573 [2024-07-23 05:03:17.783916] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.832 [2024-07-23 05:03:17.807289] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.832 [2024-07-23 05:03:17.835895] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.832 [2024-07-23 05:03:17.857422] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.832 [2024-07-23 05:03:17.859440] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.832 [2024-07-23 05:03:17.878527] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:17.832 [2024-07-23 05:03:17.907055] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:12:17.832 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:12:17.832 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:12:17.832 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:12:17.832 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:12:17.832 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:12:17.832 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:12:17.832 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:12:17.832 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:12:17.832 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:12:17.832 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:12:17.832 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:12:17.832 Login to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:12:17.832 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:12:17.832 Login to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:12:17.832 Login to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 00:12:17.832 Login to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 00:12:17.832 Login to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:12:17.832 Login to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 
00:12:17.832 Login to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:12:17.832 Login to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:12:17.832 05:03:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@73 -- # waitforiscsidevices 100 00:12:17.832 05:03:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@116 -- # local num=100 00:12:17.832 05:03:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:12:17.832 05:03:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:12:17.832 05:03:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:12:17.832 05:03:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:12:18.091 05:03:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@119 -- # n=100 00:12:18.091 05:03:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@120 -- # '[' 100 -ne 100 ']' 00:12:18.091 05:03:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@123 -- # return 0 00:12:18.091 05:03:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@74 -- # timing_exit discovery 00:12:18.091 05:03:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:18.091 05:03:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:18.091 05:03:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@76 -- # timing_enter fio 00:12:18.091 05:03:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:18.091 05:03:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:18.091 05:03:18 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 8 -t randwrite -r 10 -v 00:12:18.091 [global] 00:12:18.091 thread=1 00:12:18.091 
invalidate=1 00:12:18.091 rw=randwrite 00:12:18.091 time_based=1 00:12:18.091 runtime=10 00:12:18.091 ioengine=libaio 00:12:18.091 direct=1 00:12:18.091 bs=131072 00:12:18.091 iodepth=8 00:12:18.091 norandommap=0 00:12:18.091 numjobs=1 00:12:18.091 00:12:18.091 verify_dump=1 00:12:18.091 verify_backlog=512 00:12:18.091 verify_state_save=0 00:12:18.091 do_verify=1 00:12:18.091 verify=crc32c-intel 00:12:18.091 [job0] 00:12:18.091 filename=/dev/sdb 00:12:18.091 [job1] 00:12:18.091 filename=/dev/sdd 00:12:18.091 [job2] 00:12:18.091 filename=/dev/sde 00:12:18.091 [job3] 00:12:18.091 filename=/dev/sdj 00:12:18.091 [job4] 00:12:18.091 filename=/dev/sdn 00:12:18.091 [job5] 00:12:18.091 filename=/dev/sdr 00:12:18.091 [job6] 00:12:18.091 filename=/dev/sdv 00:12:18.091 [job7] 00:12:18.091 filename=/dev/sdx 00:12:18.091 [job8] 00:12:18.091 filename=/dev/sdaa 00:12:18.091 [job9] 00:12:18.091 filename=/dev/sdad 00:12:18.091 [job10] 00:12:18.091 filename=/dev/sdi 00:12:18.091 [job11] 00:12:18.091 filename=/dev/sdl 00:12:18.091 [job12] 00:12:18.091 filename=/dev/sdp 00:12:18.091 [job13] 00:12:18.091 filename=/dev/sds 00:12:18.091 [job14] 00:12:18.091 filename=/dev/sdz 00:12:18.091 [job15] 00:12:18.091 filename=/dev/sdaf 00:12:18.091 [job16] 00:12:18.091 filename=/dev/sdaj 00:12:18.091 [job17] 00:12:18.091 filename=/dev/sdal 00:12:18.091 [job18] 00:12:18.091 filename=/dev/sdam 00:12:18.091 [job19] 00:12:18.091 filename=/dev/sdan 00:12:18.091 [job20] 00:12:18.091 filename=/dev/sdh 00:12:18.091 [job21] 00:12:18.091 filename=/dev/sdk 00:12:18.091 [job22] 00:12:18.091 filename=/dev/sdo 00:12:18.091 [job23] 00:12:18.091 filename=/dev/sdt 00:12:18.091 [job24] 00:12:18.091 filename=/dev/sdw 00:12:18.091 [job25] 00:12:18.091 filename=/dev/sdac 00:12:18.091 [job26] 00:12:18.091 filename=/dev/sdae 00:12:18.091 [job27] 00:12:18.091 filename=/dev/sdah 00:12:18.091 [job28] 00:12:18.091 filename=/dev/sdai 00:12:18.091 [job29] 00:12:18.091 filename=/dev/sdak 00:12:18.091 [job30] 00:12:18.091 
filename=/dev/sdap 00:12:18.091 [job31] 00:12:18.091 filename=/dev/sdar 00:12:18.091 [job32] 00:12:18.091 filename=/dev/sdau 00:12:18.091 [job33] 00:12:18.091 filename=/dev/sdav 00:12:18.091 [job34] 00:12:18.091 filename=/dev/sday 00:12:18.091 [job35] 00:12:18.091 filename=/dev/sdbb 00:12:18.091 [job36] 00:12:18.091 filename=/dev/sdbd 00:12:18.091 [job37] 00:12:18.091 filename=/dev/sdbf 00:12:18.091 [job38] 00:12:18.091 filename=/dev/sdbg 00:12:18.091 [job39] 00:12:18.091 filename=/dev/sdbh 00:12:18.091 [job40] 00:12:18.091 filename=/dev/sdao 00:12:18.091 [job41] 00:12:18.091 filename=/dev/sdaq 00:12:18.091 [job42] 00:12:18.091 filename=/dev/sdas 00:12:18.091 [job43] 00:12:18.091 filename=/dev/sdat 00:12:18.091 [job44] 00:12:18.091 filename=/dev/sdaw 00:12:18.091 [job45] 00:12:18.091 filename=/dev/sdax 00:12:18.091 [job46] 00:12:18.091 filename=/dev/sdaz 00:12:18.091 [job47] 00:12:18.091 filename=/dev/sdba 00:12:18.091 [job48] 00:12:18.091 filename=/dev/sdbc 00:12:18.091 [job49] 00:12:18.091 filename=/dev/sdbe 00:12:18.092 [job50] 00:12:18.092 filename=/dev/sdbi 00:12:18.092 [job51] 00:12:18.092 filename=/dev/sdbk 00:12:18.092 [job52] 00:12:18.092 filename=/dev/sdbn 00:12:18.092 [job53] 00:12:18.092 filename=/dev/sdbs 00:12:18.092 [job54] 00:12:18.092 filename=/dev/sdbw 00:12:18.092 [job55] 00:12:18.092 filename=/dev/sdcc 00:12:18.092 [job56] 00:12:18.092 filename=/dev/sdcf 00:12:18.092 [job57] 00:12:18.092 filename=/dev/sdci 00:12:18.092 [job58] 00:12:18.092 filename=/dev/sdcm 00:12:18.092 [job59] 00:12:18.092 filename=/dev/sdcq 00:12:18.092 [job60] 00:12:18.092 filename=/dev/sdbj 00:12:18.092 [job61] 00:12:18.092 filename=/dev/sdbm 00:12:18.092 [job62] 00:12:18.092 filename=/dev/sdbo 00:12:18.092 [job63] 00:12:18.092 filename=/dev/sdbq 00:12:18.092 [job64] 00:12:18.092 filename=/dev/sdbu 00:12:18.092 [job65] 00:12:18.092 filename=/dev/sdby 00:12:18.092 [job66] 00:12:18.092 filename=/dev/sdbz 00:12:18.092 [job67] 00:12:18.092 filename=/dev/sdcd 00:12:18.092 
[job68] 00:12:18.092 filename=/dev/sdcg 00:12:18.092 [job69] 00:12:18.092 filename=/dev/sdck 00:12:18.092 [job70] 00:12:18.092 filename=/dev/sdbl 00:12:18.351 [job71] 00:12:18.351 filename=/dev/sdbt 00:12:18.351 [job72] 00:12:18.351 filename=/dev/sdbx 00:12:18.351 [job73] 00:12:18.351 filename=/dev/sdcb 00:12:18.351 [job74] 00:12:18.351 filename=/dev/sdch 00:12:18.351 [job75] 00:12:18.351 filename=/dev/sdcl 00:12:18.351 [job76] 00:12:18.351 filename=/dev/sdco 00:12:18.351 [job77] 00:12:18.351 filename=/dev/sdcr 00:12:18.351 [job78] 00:12:18.351 filename=/dev/sdcu 00:12:18.351 [job79] 00:12:18.351 filename=/dev/sdcv 00:12:18.351 [job80] 00:12:18.351 filename=/dev/sdbp 00:12:18.351 [job81] 00:12:18.351 filename=/dev/sdbr 00:12:18.351 [job82] 00:12:18.351 filename=/dev/sdbv 00:12:18.351 [job83] 00:12:18.351 filename=/dev/sdca 00:12:18.351 [job84] 00:12:18.351 filename=/dev/sdce 00:12:18.351 [job85] 00:12:18.351 filename=/dev/sdcj 00:12:18.351 [job86] 00:12:18.351 filename=/dev/sdcn 00:12:18.351 [job87] 00:12:18.351 filename=/dev/sdcp 00:12:18.351 [job88] 00:12:18.351 filename=/dev/sdcs 00:12:18.351 [job89] 00:12:18.351 filename=/dev/sdct 00:12:18.351 [job90] 00:12:18.351 filename=/dev/sda 00:12:18.351 [job91] 00:12:18.351 filename=/dev/sdc 00:12:18.351 [job92] 00:12:18.351 filename=/dev/sdf 00:12:18.351 [job93] 00:12:18.351 filename=/dev/sdg 00:12:18.351 [job94] 00:12:18.351 filename=/dev/sdm 00:12:18.351 [job95] 00:12:18.351 filename=/dev/sdq 00:12:18.351 [job96] 00:12:18.351 filename=/dev/sdu 00:12:18.351 [job97] 00:12:18.351 filename=/dev/sdy 00:12:18.351 [job98] 00:12:18.351 filename=/dev/sdab 00:12:18.351 [job99] 00:12:18.351 filename=/dev/sdag 00:12:19.726 queue_depth set to 113 (sdb) 00:12:19.726 queue_depth set to 113 (sdd) 00:12:19.726 queue_depth set to 113 (sde) 00:12:19.726 queue_depth set to 113 (sdj) 00:12:19.726 queue_depth set to 113 (sdn) 00:12:19.726 queue_depth set to 113 (sdr) 00:12:19.726 queue_depth set to 113 (sdv) 00:12:19.726 queue_depth set 
to 113 (sdx) 00:12:19.726 queue_depth set to 113 (sdaa) 00:12:19.726 queue_depth set to 113 (sdad) 00:12:19.726 queue_depth set to 113 (sdi) 00:12:19.726 queue_depth set to 113 (sdl) 00:12:19.726 queue_depth set to 113 (sdp) 00:12:19.726 queue_depth set to 113 (sds) 00:12:19.726 queue_depth set to 113 (sdz) 00:12:19.726 queue_depth set to 113 (sdaf) 00:12:19.727 queue_depth set to 113 (sdaj) 00:12:19.988 queue_depth set to 113 (sdal) 00:12:19.988 queue_depth set to 113 (sdam) 00:12:19.988 queue_depth set to 113 (sdan) 00:12:19.988 queue_depth set to 113 (sdh) 00:12:19.988 queue_depth set to 113 (sdk) 00:12:19.988 queue_depth set to 113 (sdo) 00:12:19.988 queue_depth set to 113 (sdt) 00:12:19.988 queue_depth set to 113 (sdw) 00:12:19.988 queue_depth set to 113 (sdac) 00:12:19.988 queue_depth set to 113 (sdae) 00:12:19.988 queue_depth set to 113 (sdah) 00:12:19.988 queue_depth set to 113 (sdai) 00:12:20.247 queue_depth set to 113 (sdak) 00:12:20.247 queue_depth set to 113 (sdap) 00:12:20.247 queue_depth set to 113 (sdar) 00:12:20.247 queue_depth set to 113 (sdau) 00:12:20.247 queue_depth set to 113 (sdav) 00:12:20.247 queue_depth set to 113 (sday) 00:12:20.247 queue_depth set to 113 (sdbb) 00:12:20.247 queue_depth set to 113 (sdbd) 00:12:20.247 queue_depth set to 113 (sdbf) 00:12:20.247 queue_depth set to 113 (sdbg) 00:12:20.247 queue_depth set to 113 (sdbh) 00:12:20.247 queue_depth set to 113 (sdao) 00:12:20.247 queue_depth set to 113 (sdaq) 00:12:20.505 queue_depth set to 113 (sdas) 00:12:20.505 queue_depth set to 113 (sdat) 00:12:20.505 queue_depth set to 113 (sdaw) 00:12:20.505 queue_depth set to 113 (sdax) 00:12:20.505 queue_depth set to 113 (sdaz) 00:12:20.505 queue_depth set to 113 (sdba) 00:12:20.505 queue_depth set to 113 (sdbc) 00:12:20.505 queue_depth set to 113 (sdbe) 00:12:20.505 queue_depth set to 113 (sdbi) 00:12:20.505 queue_depth set to 113 (sdbk) 00:12:20.505 queue_depth set to 113 (sdbn) 00:12:20.505 queue_depth set to 113 (sdbs) 00:12:20.505 
queue_depth set to 113 (sdbw) 00:12:20.763 queue_depth set to 113 (sdcc) 00:12:20.763 queue_depth set to 113 (sdcf) 00:12:20.763 queue_depth set to 113 (sdci) 00:12:20.763 queue_depth set to 113 (sdcm) 00:12:20.763 queue_depth set to 113 (sdcq) 00:12:20.763 queue_depth set to 113 (sdbj) 00:12:20.763 queue_depth set to 113 (sdbm) 00:12:20.763 queue_depth set to 113 (sdbo) 00:12:20.763 queue_depth set to 113 (sdbq) 00:12:20.763 queue_depth set to 113 (sdbu) 00:12:20.763 queue_depth set to 113 (sdby) 00:12:20.763 queue_depth set to 113 (sdbz) 00:12:21.021 queue_depth set to 113 (sdcd) 00:12:21.021 queue_depth set to 113 (sdcg) 00:12:21.021 queue_depth set to 113 (sdck) 00:12:21.021 queue_depth set to 113 (sdbl) 00:12:21.021 queue_depth set to 113 (sdbt) 00:12:21.021 queue_depth set to 113 (sdbx) 00:12:21.021 queue_depth set to 113 (sdcb) 00:12:21.021 queue_depth set to 113 (sdch) 00:12:21.021 queue_depth set to 113 (sdcl) 00:12:21.021 queue_depth set to 113 (sdco) 00:12:21.021 queue_depth set to 113 (sdcr) 00:12:21.021 queue_depth set to 113 (sdcu) 00:12:21.021 queue_depth set to 113 (sdcv) 00:12:21.280 queue_depth set to 113 (sdbp) 00:12:21.280 queue_depth set to 113 (sdbr) 00:12:21.280 queue_depth set to 113 (sdbv) 00:12:21.280 queue_depth set to 113 (sdca) 00:12:21.280 queue_depth set to 113 (sdce) 00:12:21.280 queue_depth set to 113 (sdcj) 00:12:21.280 queue_depth set to 113 (sdcn) 00:12:21.280 queue_depth set to 113 (sdcp) 00:12:21.280 queue_depth set to 113 (sdcs) 00:12:21.280 queue_depth set to 113 (sdct) 00:12:21.280 queue_depth set to 113 (sda) 00:12:21.280 queue_depth set to 113 (sdc) 00:12:21.280 queue_depth set to 113 (sdf) 00:12:21.538 queue_depth set to 113 (sdg) 00:12:21.538 queue_depth set to 113 (sdm) 00:12:21.538 queue_depth set to 113 (sdq) 00:12:21.538 queue_depth set to 113 (sdu) 00:12:21.538 queue_depth set to 113 (sdy) 00:12:21.538 queue_depth set to 113 (sdab) 00:12:21.538 queue_depth set to 113 (sdag) 00:12:21.797 job0: (g=0): rw=randwrite, 
bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.797 job1: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.797 job2: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.797 job3: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.797 job4: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.797 job5: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job6: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job7: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job8: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job9: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job10: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job11: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job12: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job13: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job14: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job15: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 
00:12:21.798 job16: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job17: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job18: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job19: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job20: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job21: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job22: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job23: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job24: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job25: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job26: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job27: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job28: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job29: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job30: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job31: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 
128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job32: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job33: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job34: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job35: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job36: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job37: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job38: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job39: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job40: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job41: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job42: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job43: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job44: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job45: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job46: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 
job47: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job48: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job49: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job50: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job51: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job52: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job53: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job54: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job55: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job56: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job57: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job58: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job59: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job60: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job61: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job62: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 
128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job63: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job64: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job65: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job66: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job67: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job68: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job69: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job70: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job71: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job72: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job73: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job74: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job75: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job76: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job77: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job78: (g=0): 
rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job79: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job80: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job81: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job82: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job83: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job84: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job85: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job86: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job87: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job88: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job89: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job90: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job91: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job92: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job93: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, 
ioengine=libaio, iodepth=8 00:12:21.798 job94: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job95: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job96: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job97: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job98: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 job99: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:21.798 fio-3.35 00:12:21.798 Starting 100 threads 00:12:21.798 [2024-07-23 05:03:21.981162] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:21.798 [2024-07-23 05:03:21.985148] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:21.798 [2024-07-23 05:03:21.988168] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:21.798 [2024-07-23 05:03:21.991088] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:21.798 [2024-07-23 05:03:21.993757] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:21.798 [2024-07-23 05:03:21.995629] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:21.799 [2024-07-23 05:03:21.997787] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:21.799 [2024-07-23 05:03:21.999709] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:21.799 [2024-07-23 05:03:22.001685] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:21.799 [2024-07-23 05:03:22.003600] 
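The fio banner above (job0 through job99, all identical) corresponds to a job description like the following. This is a reconstruction from the printed parameters only: `rw=randwrite`, `bs=128KiB`, `ioengine=libaio`, `iodepth=8`, 100 jobs in one group; the filenames, runtime, and section names are illustrative assumptions, not taken from the log.

```ini
; Reconstructed sketch of the fio job behind the banner above.
; Only rw, bs, ioengine, iodepth, and the job count are confirmed by the log.
[global]
rw=randwrite
bs=128k
ioengine=libaio
iodepth=8
direct=1            ; assumption: raw-device tests usually bypass the page cache

[job0]
filename=/dev/sda   ; assumption: one job per SCSI device from the list above
; ... repeated per device up to job99
```

fio then reports `Starting 100 threads`, matching one job per configured device.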
scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:21.799 [2024-07-23 05:03:22.006242] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:21.799 [2024-07-23 05:03:22.009015] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:21.799 [2024-07-23 05:03:22.011964] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:21.799 [2024-07-23 05:03:22.014920] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.018182] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.021172] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.024113] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.027290] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.030131] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.033399] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.035589] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.038091] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.040831] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.046427] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.048531] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.050578] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: 
unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.052571] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.054843] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.056996] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.059153] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.061954] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.064056] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.066131] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.068143] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.070130] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.072095] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.073979] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.075951] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.078027] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.079903] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.084339] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.089584] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 
[2024-07-23 05:03:22.091970] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.094329] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.096717] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.098868] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.101229] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.103043] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.104992] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.106918] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.109277] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.112518] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.116882] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.119112] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.121846] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.124817] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.127500] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.130303] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.133162] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.136078] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.139549] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.141849] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.143890] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.146033] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.148088] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.150672] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.152700] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.154695] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.156681] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.158561] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.160614] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.162486] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.164347] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.166253] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.170885] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.172995] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.174952] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.177295] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.179648] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.182146] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.184819] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.186891] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.189071] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.191044] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.193340] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.195369] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.197492] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.199476] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.202132] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.204026] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.205942] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 
05:03:22.207941] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.209829] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.211670] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.213637] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.215642] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.217500] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.219349] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.221201] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.058 [2024-07-23 05:03:22.223504] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:26.279 [2024-07-23 05:03:26.342339] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:26.537 [2024-07-23 05:03:26.530788] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:26.537 [2024-07-23 05:03:26.620188] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:26.537 [2024-07-23 05:03:26.664627] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:26.538 [2024-07-23 05:03:26.728386] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:26.796 [2024-07-23 05:03:26.817151] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:26.796 [2024-07-23 05:03:26.882669] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:26.796 [2024-07-23 05:03:26.953760] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.054 [2024-07-23 05:03:27.015417] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.054 [2024-07-23 05:03:27.076427] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.054 [2024-07-23 05:03:27.166947] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.054 [2024-07-23 05:03:27.257794] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.313 [2024-07-23 05:03:27.324834] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.313 [2024-07-23 05:03:27.397080] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.313 [2024-07-23 05:03:27.486847] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.571 [2024-07-23 05:03:27.549774] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.571 [2024-07-23 05:03:27.609390] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.571 [2024-07-23 05:03:27.674408] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.571 [2024-07-23 05:03:27.782799] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.836 [2024-07-23 05:03:27.808419] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.836 [2024-07-23 05:03:27.845051] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.836 [2024-07-23 05:03:27.898131] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.836 [2024-07-23 05:03:27.946676] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.836 [2024-07-23 05:03:27.987053] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:12:27.836 [2024-07-23 05:03:28.030181] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.095 [2024-07-23 05:03:28.061157] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.095 [2024-07-23 05:03:28.135499] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.095 [2024-07-23 05:03:28.179676] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.095 [2024-07-23 05:03:28.233975] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.354 [2024-07-23 05:03:28.340348] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.354 [2024-07-23 05:03:28.399490] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.354 [2024-07-23 05:03:28.444305] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.354 [2024-07-23 05:03:28.511450] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.612 [2024-07-23 05:03:28.660689] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.612 [2024-07-23 05:03:28.707885] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.612 [2024-07-23 05:03:28.764108] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.612 [2024-07-23 05:03:28.811194] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.873 [2024-07-23 05:03:28.880037] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.873 [2024-07-23 05:03:28.918920] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.873 [2024-07-23 05:03:28.973631] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.873 [2024-07-23 
05:03:29.043385] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:35.984
00:12:35.984 job0: (groupid=0, jobs=1): err= 0: pid=81120: Tue Jul 23 05:03:36 2024
00:12:35.984 read: IOPS=78, BW=9.78MiB/s (10.2MB/s)(88.5MiB/9053msec)
00:12:35.984 slat (usec): min=6, max=1564, avg=68.36, stdev=136.77
00:12:35.984 clat (msec): min=3, max=202, avg=17.32, stdev=19.97
00:12:35.984 lat (msec): min=3, max=202, avg=17.39, stdev=19.96
00:12:35.984 clat percentiles (msec):
00:12:35.984 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 8], 20.00th=[ 9],
00:12:35.984 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 14],
00:12:35.984 | 70.00th=[ 17], 80.00th=[ 19], 90.00th=[ 28], 95.00th=[ 43],
00:12:35.984 | 99.00th=[ 142], 99.50th=[ 148], 99.90th=[ 203], 99.95th=[ 203],
00:12:35.984 | 99.99th=[ 203]
00:12:35.984 write: IOPS=94, BW=11.8MiB/s (12.4MB/s)(100MiB/8462msec); 0 zone resets
00:12:35.984 slat (usec): min=38, max=19841, avg=181.46, stdev=780.83
00:12:35.984 clat (usec): min=848, max=366233, avg=83784.16, stdev=45660.13
00:12:35.984 lat (usec): min=931, max=366308, avg=83965.62, stdev=45652.09
00:12:35.984 clat percentiles (msec):
00:12:35.984 | 1.00th=[ 4], 5.00th=[ 27], 10.00th=[ 56], 20.00th=[ 59],
00:12:35.984 | 30.00th=[ 62], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 78],
00:12:35.984 | 70.00th=[ 92], 80.00th=[ 107], 90.00th=[ 136], 95.00th=[ 155],
00:12:35.984 | 99.00th=[ 296], 99.50th=[ 317], 99.90th=[ 368], 99.95th=[ 368],
00:12:35.984 | 99.99th=[ 368]
00:12:35.984 bw ( KiB/s): min= 512,
max=25344, per=0.82%, avg=10238.45, stdev=6090.20, samples=20
00:12:35.984 iops : min= 4, max= 198, avg=79.95, stdev=47.55, samples=20
00:12:35.984 lat (usec) : 1000=0.07%
00:12:35.984 lat (msec) : 2=0.13%, 4=0.60%, 10=13.79%, 20=26.13%, 50=7.89%
00:12:35.984 lat (msec) : 100=37.20%, 250=13.33%, 500=0.86%
00:12:35.984 cpu : usr=0.61%, sys=0.25%, ctx=2584, majf=0, minf=3
00:12:35.984 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0%
00:12:35.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:35.984 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:35.984 issued rwts: total=708,800,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:35.984 latency : target=0, window=0, percentile=100.00%, depth=8
00:12:35.984 job1: (groupid=0, jobs=1): err= 0: pid=81121: Tue Jul 23 05:03:36 2024
00:12:35.984 read: IOPS=78, BW=9.79MiB/s (10.3MB/s)(80.0MiB/8175msec)
00:12:35.984 slat (usec): min=6, max=1890, avg=60.62, stdev=127.08
00:12:35.984 clat (usec): min=4173, max=65422, avg=13129.22, stdev=8108.22
00:12:35.984 lat (usec): min=4191, max=65639, avg=13189.84, stdev=8114.95
00:12:35.984 clat percentiles (usec):
00:12:35.984 | 1.00th=[ 5407], 5.00th=[ 6063], 10.00th=[ 6652], 20.00th=[ 8225],
00:12:35.984 | 30.00th=[ 8586], 40.00th=[ 9503], 50.00th=[11076], 60.00th=[11994],
00:12:35.984 | 70.00th=[13960], 80.00th=[17171], 90.00th=[21890], 95.00th=[24249],
00:12:35.984 | 99.00th=[63701], 99.50th=[64750], 99.90th=[65274], 99.95th=[65274],
00:12:35.984 | 99.99th=[65274]
00:12:35.984 write: IOPS=78, BW=9.82MiB/s (10.3MB/s)(88.1MiB/8970msec); 0 zone resets
00:12:35.984 slat (usec): min=31, max=6712, avg=171.57, stdev=366.74
00:12:35.984 clat (msec): min=34, max=362, avg=100.91, stdev=45.52
00:12:35.984 lat (msec): min=35, max=362, avg=101.08, stdev=45.52
00:12:35.984 clat percentiles (msec):
00:12:35.984 | 1.00th=[ 55], 5.00th=[ 57], 10.00th=[ 58], 20.00th=[ 64],
00:12:35.984 | 30.00th=[ 71], 40.00th=[ 80], 50.00th=[
89], 60.00th=[ 106],
00:12:35.984 | 70.00th=[ 115], 80.00th=[ 126], 90.00th=[ 155], 95.00th=[ 190],
00:12:35.984 | 99.00th=[ 275], 99.50th=[ 305], 99.90th=[ 363], 99.95th=[ 363],
00:12:35.984 | 99.99th=[ 363]
00:12:35.984 bw ( KiB/s): min= 1536, max=16128, per=0.72%, avg=8955.05, stdev=3796.87, samples=19
00:12:35.984 iops : min= 12, max= 126, avg=69.79, stdev=29.71, samples=19
00:12:35.984 lat (msec) : 10=20.07%, 20=21.26%, 50=5.72%, 100=29.89%, 250=22.16%
00:12:35.984 lat (msec) : 500=0.89%
00:12:35.984 cpu : usr=0.43%, sys=0.32%, ctx=2354, majf=0, minf=5
00:12:35.984 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0%
00:12:35.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:35.984 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:35.984 issued rwts: total=640,705,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:35.984 latency : target=0, window=0, percentile=100.00%, depth=8
00:12:35.984 job2: (groupid=0, jobs=1): err= 0: pid=81129: Tue Jul 23 05:03:36 2024
00:12:35.984 read: IOPS=74, BW=9496KiB/s (9724kB/s)(80.0MiB/8627msec)
00:12:35.984 slat (usec): min=5, max=3024, avg=87.68, stdev=219.45
00:12:35.984 clat (usec): min=7876, max=55521, avg=15628.07, stdev=7700.22
00:12:35.984 lat (usec): min=8249, max=55821, avg=15715.75, stdev=7700.28
00:12:35.984 clat percentiles (usec):
00:12:35.984 | 1.00th=[ 8455], 5.00th=[ 9241], 10.00th=[10028], 20.00th=[10814],
00:12:35.984 | 30.00th=[11469], 40.00th=[12518], 50.00th=[13698], 60.00th=[14615],
00:12:35.984 | 70.00th=[15795], 80.00th=[17695], 90.00th=[23200], 95.00th=[32637],
00:12:35.984 | 99.00th=[52167], 99.50th=[54789], 99.90th=[55313], 99.95th=[55313],
00:12:35.984 | 99.99th=[55313]
00:12:35.984 write: IOPS=91, BW=11.4MiB/s (12.0MB/s)(100MiB/8755msec); 0 zone resets
00:12:35.984 slat (usec): min=38, max=9390, avg=158.13, stdev=376.82
00:12:35.984 clat (msec): min=18, max=341, avg=86.60, stdev=40.78
00:12:35.984 lat (msec): min=18, max=341,
avg=86.76, stdev=40.81
00:12:35.984 clat percentiles (msec):
00:12:35.984 | 1.00th=[ 20], 5.00th=[ 55], 10.00th=[ 57], 20.00th=[ 60],
00:12:35.984 | 30.00th=[ 65], 40.00th=[ 69], 50.00th=[ 73], 60.00th=[ 81],
00:12:35.984 | 70.00th=[ 94], 80.00th=[ 108], 90.00th=[ 132], 95.00th=[ 163],
00:12:35.984 | 99.00th=[ 234], 99.50th=[ 284], 99.90th=[ 342], 99.95th=[ 342],
00:12:35.984 | 99.99th=[ 342]
00:12:35.984 bw ( KiB/s): min= 1532, max=16160, per=0.82%, avg=10148.90, stdev=4938.88, samples=20
00:12:35.984 iops : min= 11, max= 126, avg=79.10, stdev=38.69, samples=20
00:12:35.984 lat (msec) : 10=3.96%, 20=34.93%, 50=7.01%, 100=39.79%, 250=13.75%
00:12:35.984 lat (msec) : 500=0.56%
00:12:35.984 cpu : usr=0.49%, sys=0.34%, ctx=2419, majf=0, minf=9
00:12:35.984 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:12:35.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:35.984 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:35.984 issued rwts: total=640,800,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:35.984 latency : target=0, window=0, percentile=100.00%, depth=8
00:12:35.984 job3: (groupid=0, jobs=1): err= 0: pid=81143: Tue Jul 23 05:03:36 2024
00:12:35.984 read: IOPS=80, BW=10.1MiB/s (10.6MB/s)(80.0MiB/7906msec)
00:12:35.984 slat (usec): min=6, max=1501, avg=68.47, stdev=136.57
00:12:35.984 clat (msec): min=3, max=365, avg=26.06, stdev=49.32
00:12:35.984 lat (msec): min=3, max=365, avg=26.13, stdev=49.33
00:12:35.984 clat percentiles (msec):
00:12:35.984 | 1.00th=[ 8], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 10],
00:12:35.984 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 15],
00:12:35.984 | 70.00th=[ 18], 80.00th=[ 22], 90.00th=[ 46], 95.00th=[ 75],
00:12:35.984 | 99.00th=[ 313], 99.50th=[ 359], 99.90th=[ 368], 99.95th=[ 368],
00:12:35.984 | 99.99th=[ 368]
00:12:35.984 write: IOPS=81, BW=10.2MiB/s (10.7MB/s)(81.0MiB/7909msec); 0 zone resets
00:12:35.984 slat (usec): min=32, max=6418,
avg=146.30, stdev=301.64
00:12:35.984 clat (msec): min=53, max=223, avg=96.96, stdev=33.80
00:12:35.984 lat (msec): min=53, max=223, avg=97.11, stdev=33.79
00:12:35.984 clat percentiles (msec):
00:12:35.984 | 1.00th=[ 56], 5.00th=[ 59], 10.00th=[ 62], 20.00th=[ 67],
00:12:35.984 | 30.00th=[ 72], 40.00th=[ 80], 50.00th=[ 88], 60.00th=[ 104],
00:12:35.985 | 70.00th=[ 112], 80.00th=[ 126], 90.00th=[ 142], 95.00th=[ 161],
00:12:35.985 | 99.00th=[ 205], 99.50th=[ 213], 99.90th=[ 224], 99.95th=[ 224],
00:12:35.985 | 99.99th=[ 224]
00:12:35.985 bw ( KiB/s): min= 1792, max=15104, per=0.72%, avg=8930.78, stdev=4080.26, samples=18
00:12:35.985 iops : min= 14, max= 118, avg=69.61, stdev=31.97, samples=18
00:12:35.985 lat (msec) : 4=0.08%, 10=13.28%, 20=25.54%, 50=6.21%, 100=31.52%
00:12:35.985 lat (msec) : 250=22.20%, 500=1.16%
00:12:35.985 cpu : usr=0.45%, sys=0.26%, ctx=2200, majf=0, minf=9
00:12:35.985 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0%
00:12:35.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:35.985 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:35.985 issued rwts: total=640,648,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:35.985 latency : target=0, window=0, percentile=100.00%, depth=8
00:12:35.985 job4: (groupid=0, jobs=1): err= 0: pid=81163: Tue Jul 23 05:03:36 2024
00:12:35.985 read: IOPS=79, BW=9.91MiB/s (10.4MB/s)(80.0MiB/8069msec)
00:12:35.985 slat (usec): min=5, max=1206, avg=87.55, stdev=155.84
00:12:35.985 clat (msec): min=4, max=149, avg=21.83, stdev=23.50
00:12:35.985 lat (msec): min=4, max=149, avg=21.92, stdev=23.49
00:12:35.985 clat percentiles (msec):
00:12:35.985 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 10],
00:12:35.985 | 30.00th=[ 11], 40.00th=[ 13], 50.00th=[ 15], 60.00th=[ 17],
00:12:35.985 | 70.00th=[ 20], 80.00th=[ 26], 90.00th=[ 40], 95.00th=[ 81],
00:12:35.985 | 99.00th=[ 134], 99.50th=[ 140], 99.90th=[ 150], 99.95th=[ 150],
00:12:35.985 |
99.99th=[ 150]
00:12:35.985 write: IOPS=79, BW=9.99MiB/s (10.5MB/s)(82.6MiB/8269msec); 0 zone resets
00:12:35.985 slat (usec): min=39, max=3159, avg=156.12, stdev=224.90
00:12:35.985 clat (msec): min=37, max=335, avg=99.34, stdev=37.99
00:12:35.985 lat (msec): min=37, max=335, avg=99.50, stdev=37.99
00:12:35.985 clat percentiles (msec):
00:12:35.985 | 1.00th=[ 54], 5.00th=[ 57], 10.00th=[ 61], 20.00th=[ 67],
00:12:35.985 | 30.00th=[ 72], 40.00th=[ 81], 50.00th=[ 93], 60.00th=[ 108],
00:12:35.985 | 70.00th=[ 114], 80.00th=[ 125], 90.00th=[ 142], 95.00th=[ 161],
00:12:35.985 | 99.00th=[ 226], 99.50th=[ 232], 99.90th=[ 334], 99.95th=[ 334],
00:12:35.985 | 99.99th=[ 334]
00:12:35.985 bw ( KiB/s): min= 2816, max=15390, per=0.73%, avg=9098.11, stdev=3571.74, samples=18
00:12:35.985 iops : min= 22, max= 120, avg=70.94, stdev=27.85, samples=18
00:12:35.985 lat (msec) : 10=13.37%, 20=22.14%, 50=10.38%, 100=29.36%, 250=24.52%
00:12:35.985 lat (msec) : 500=0.23%
00:12:35.985 cpu : usr=0.51%, sys=0.23%, ctx=2312, majf=0, minf=5
00:12:35.985 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0%
00:12:35.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:35.985 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:35.985 issued rwts: total=640,661,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:35.985 latency : target=0, window=0, percentile=100.00%, depth=8
00:12:35.985 job5: (groupid=0, jobs=1): err= 0: pid=81293: Tue Jul 23 05:03:36 2024
00:12:35.985 read: IOPS=74, BW=9552KiB/s (9781kB/s)(80.0MiB/8576msec)
00:12:35.985 slat (usec): min=5, max=1152, avg=67.39, stdev=126.95
00:12:35.985 clat (msec): min=4, max=108, avg=14.71, stdev=11.19
00:12:35.985 lat (msec): min=4, max=108, avg=14.77, stdev=11.19
00:12:35.985 clat percentiles (msec):
00:12:35.985 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 8],
00:12:35.985 | 30.00th=[ 9], 40.00th=[ 12], 50.00th=[ 14], 60.00th=[ 15],
00:12:35.985 | 70.00th=[ 16],
80.00th=[ 18], 90.00th=[ 24], 95.00th=[ 30],
00:12:35.985 | 99.00th=[ 79], 99.50th=[ 92], 99.90th=[ 109], 99.95th=[ 109],
00:12:35.985 | 99.99th=[ 109]
00:12:35.985 write: IOPS=85, BW=10.7MiB/s (11.2MB/s)(94.4MiB/8845msec); 0 zone resets
00:12:35.985 slat (usec): min=38, max=4057, avg=149.73, stdev=245.54
00:12:35.985 clat (msec): min=36, max=294, avg=93.02, stdev=36.25
00:12:35.985 lat (msec): min=36, max=294, avg=93.16, stdev=36.25
00:12:35.985 clat percentiles (msec):
00:12:35.985 | 1.00th=[ 43], 5.00th=[ 56], 10.00th=[ 58], 20.00th=[ 64],
00:12:35.985 | 30.00th=[ 69], 40.00th=[ 74], 50.00th=[ 85], 60.00th=[ 94],
00:12:35.985 | 70.00th=[ 108], 80.00th=[ 121], 90.00th=[ 136], 95.00th=[ 159],
00:12:35.985 | 99.00th=[ 222], 99.50th=[ 262], 99.90th=[ 296], 99.95th=[ 296],
00:12:35.985 | 99.99th=[ 296]
00:12:35.985 bw ( KiB/s): min= 1536, max=16384, per=0.77%, avg=9572.15, stdev=4368.21, samples=20
00:12:35.985 iops : min= 12, max= 128, avg=74.60, stdev=34.24, samples=20
00:12:35.985 lat (msec) : 10=15.70%, 20=22.65%, 50=7.67%, 100=34.19%, 250=19.43%
00:12:35.985 lat (msec) : 500=0.36%
00:12:35.985 cpu : usr=0.50%, sys=0.30%, ctx=2394, majf=0, minf=1
00:12:35.985 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:12:35.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:35.985 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:35.985 issued rwts: total=640,755,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:35.985 latency : target=0, window=0, percentile=100.00%, depth=8
00:12:35.985 job6: (groupid=0, jobs=1): err= 0: pid=81414: Tue Jul 23 05:03:36 2024
00:12:35.985 read: IOPS=88, BW=11.1MiB/s (11.6MB/s)(99.5MiB/8965msec)
00:12:35.985 slat (usec): min=6, max=1712, avg=71.19, stdev=160.69
00:12:35.985 clat (msec): min=3, max=114, avg=16.57, stdev=12.69
00:12:35.985 lat (msec): min=3, max=114, avg=16.65, stdev=12.69
00:12:35.985 clat percentiles (msec):
00:12:35.985 | 1.00th=[ 8], 5.00th=[ 9],
10.00th=[ 10], 20.00th=[ 10],
00:12:35.985 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 14], 60.00th=[ 16],
00:12:35.985 | 70.00th=[ 17], 80.00th=[ 21], 90.00th=[ 26], 95.00th=[ 34],
00:12:35.985 | 99.00th=[ 93], 99.50th=[ 102], 99.90th=[ 115], 99.95th=[ 115],
00:12:35.985 | 99.99th=[ 115]
00:12:35.985 write: IOPS=95, BW=12.0MiB/s (12.6MB/s)(100MiB/8341msec); 0 zone resets
00:12:35.985 slat (usec): min=32, max=4998, avg=170.70, stdev=351.11
00:12:35.985 clat (usec): min=1021, max=302236, avg=82632.25, stdev=36665.08
00:12:35.985 lat (usec): min=1480, max=302292, avg=82802.96, stdev=36644.43
00:12:35.985 clat percentiles (msec):
00:12:35.985 | 1.00th=[ 6], 5.00th=[ 31], 10.00th=[ 57], 20.00th=[ 60],
00:12:35.985 | 30.00th=[ 64], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 79],
00:12:35.985 | 70.00th=[ 90], 80.00th=[ 109], 90.00th=[ 129], 95.00th=[ 155],
00:12:35.985 | 99.00th=[ 201], 99.50th=[ 211], 99.90th=[ 305], 99.95th=[ 305],
00:12:35.985 | 99.99th=[ 305]
00:12:35.985 bw ( KiB/s): min= 1792, max=23296, per=0.85%, avg=10600.05, stdev=5611.74, samples=19
00:12:35.985 iops : min= 14, max= 182, avg=82.63, stdev=43.92, samples=19
00:12:35.985 lat (msec) : 2=0.19%, 4=0.25%, 10=12.47%, 20=28.51%, 50=10.28%
00:12:35.985 lat (msec) : 100=35.65%, 250=12.59%, 500=0.06%
00:12:35.985 cpu : usr=0.60%, sys=0.29%, ctx=2666, majf=0, minf=5
00:12:35.985 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:12:35.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:35.985 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:35.985 issued rwts: total=796,800,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:35.985 latency : target=0, window=0, percentile=100.00%, depth=8
00:12:35.985 job7: (groupid=0, jobs=1): err= 0: pid=81498: Tue Jul 23 05:03:36 2024
00:12:35.985 read: IOPS=74, BW=9565KiB/s (9794kB/s)(80.0MiB/8565msec)
00:12:35.985 slat (usec): min=7, max=1412, avg=78.73, stdev=161.72
00:12:35.985 clat (usec):
min=6956, max=94243, avg=16648.22, stdev=12234.27
00:12:35.985 lat (usec): min=6984, max=94255, avg=16726.95, stdev=12241.55
00:12:35.985 clat percentiles (usec):
00:12:35.985 | 1.00th=[ 7177], 5.00th=[ 7963], 10.00th=[ 8455], 20.00th=[ 9634],
00:12:35.985 | 30.00th=[11338], 40.00th=[12911], 50.00th=[13960], 60.00th=[15008],
00:12:35.985 | 70.00th=[16581], 80.00th=[18744], 90.00th=[24249], 95.00th=[31851],
00:12:35.985 | 99.00th=[84411], 99.50th=[85459], 99.90th=[93848], 99.95th=[93848],
00:12:35.985 | 99.99th=[93848]
00:12:35.985 write: IOPS=89, BW=11.2MiB/s (11.7MB/s)(97.1MiB/8686msec); 0 zone resets
00:12:35.985 slat (usec): min=35, max=3990, avg=177.99, stdev=336.11
00:12:35.985 clat (msec): min=30, max=346, avg=88.68, stdev=39.22
00:12:35.985 lat (msec): min=30, max=346, avg=88.86, stdev=39.23
00:12:35.985 clat percentiles (msec):
00:12:35.985 | 1.00th=[ 36], 5.00th=[ 55], 10.00th=[ 57], 20.00th=[ 61],
00:12:35.985 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 75], 60.00th=[ 84],
00:12:35.985 | 70.00th=[ 95], 80.00th=[ 112], 90.00th=[ 132], 95.00th=[ 163],
00:12:35.985 | 99.00th=[ 257], 99.50th=[ 271], 99.90th=[ 347], 99.95th=[ 347],
00:12:35.985 | 99.99th=[ 347]
00:12:35.985 bw ( KiB/s): min= 2560, max=16640, per=0.83%, avg=10371.42, stdev=4287.28, samples=19
00:12:35.985 iops : min= 20, max= 130, avg=80.89, stdev=33.40, samples=19
00:12:35.985 lat (msec) : 10=9.60%, 20=27.88%, 50=6.99%, 100=40.65%, 250=14.26%
00:12:35.985 lat (msec) : 500=0.64%
00:12:35.985 cpu : usr=0.51%, sys=0.30%, ctx=2451, majf=0, minf=3
00:12:35.985 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:12:35.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:35.985 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:35.985 issued rwts: total=640,777,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:35.985 latency : target=0, window=0, percentile=100.00%, depth=8
00:12:35.985 job8: (groupid=0, jobs=1): err= 0: pid=81658:
Tue Jul 23 05:03:36 2024
00:12:35.985 read: IOPS=76, BW=9761KiB/s (9995kB/s)(80.0MiB/8393msec)
00:12:35.985 slat (usec): min=6, max=1357, avg=68.94, stdev=140.75
00:12:35.985 clat (msec): min=5, max=110, avg=18.13, stdev=13.30
00:12:35.985 lat (msec): min=5, max=110, avg=18.20, stdev=13.29
00:12:35.985 clat percentiles (msec):
00:12:35.985 | 1.00th=[ 7], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 12],
00:12:35.985 | 30.00th=[ 13], 40.00th=[ 15], 50.00th=[ 15], 60.00th=[ 16],
00:12:35.985 | 70.00th=[ 18], 80.00th=[ 21], 90.00th=[ 29], 95.00th=[ 37],
00:12:35.985 | 99.00th=[ 103], 99.50th=[ 107], 99.90th=[ 111], 99.95th=[ 111],
00:12:35.985 | 99.99th=[ 111]
00:12:35.985 write: IOPS=90, BW=11.4MiB/s (11.9MB/s)(96.1MiB/8467msec); 0 zone resets
00:12:35.985 slat (usec): min=37, max=3667, avg=149.17, stdev=243.79
00:12:35.985 clat (msec): min=40, max=276, avg=87.21, stdev=38.72
00:12:35.986 lat (msec): min=40, max=276, avg=87.35, stdev=38.73
00:12:35.986 clat percentiles (msec):
00:12:35.986 | 1.00th=[ 48], 5.00th=[ 56], 10.00th=[ 57], 20.00th=[ 59],
00:12:35.986 | 30.00th=[ 65], 40.00th=[ 67], 50.00th=[ 73], 60.00th=[ 81],
00:12:35.986 | 70.00th=[ 91], 80.00th=[ 112], 90.00th=[ 134], 95.00th=[ 163],
00:12:35.986 | 99.00th=[ 251], 99.50th=[ 271], 99.90th=[ 275], 99.95th=[ 275],
00:12:35.986 | 99.99th=[ 275]
00:12:35.986 bw ( KiB/s): min= 1792, max=16640, per=0.83%, avg=10364.00, stdev=4668.26, samples=18
00:12:35.986 iops : min= 14, max= 130, avg=80.78, stdev=36.49, samples=18
00:12:35.986 lat (msec) : 10=5.46%, 20=30.09%, 50=9.37%, 100=40.24%, 250=14.27%
00:12:35.986 lat (msec) : 500=0.57%
00:12:35.986 cpu : usr=0.56%, sys=0.24%, ctx=2439, majf=0, minf=3
00:12:35.986 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:12:35.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:35.986 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:35.986 issued rwts: total=640,769,0,0 short=0,0,0,0
dropped=0,0,0,0
00:12:35.986 latency : target=0, window=0, percentile=100.00%, depth=8
00:12:35.986 job9: (groupid=0, jobs=1): err= 0: pid=81770: Tue Jul 23 05:03:36 2024
00:12:35.986 read: IOPS=73, BW=9468KiB/s (9696kB/s)(80.0MiB/8652msec)
00:12:35.986 slat (usec): min=6, max=5290, avg=80.30, stdev=270.74
00:12:35.986 clat (usec): min=6532, max=83210, avg=15741.55, stdev=9930.83
00:12:35.986 lat (usec): min=7134, max=83223, avg=15821.85, stdev=9921.84
00:12:35.986 clat percentiles (usec):
00:12:35.986 | 1.00th=[ 7701], 5.00th=[ 8225], 10.00th=[ 8848], 20.00th=[ 9896],
00:12:35.986 | 30.00th=[11338], 40.00th=[12780], 50.00th=[13829], 60.00th=[14877],
00:12:35.986 | 70.00th=[16057], 80.00th=[17957], 90.00th=[21890], 95.00th=[28705],
00:12:35.986 | 99.00th=[69731], 99.50th=[76022], 99.90th=[83362], 99.95th=[83362],
00:12:35.986 | 99.99th=[83362]
00:12:35.986 write: IOPS=91, BW=11.4MiB/s (12.0MB/s)(100MiB/8766msec); 0 zone resets
00:12:35.986 slat (usec): min=38, max=5017, avg=178.55, stdev=318.12
00:12:35.986 clat (msec): min=25, max=331, avg=86.82, stdev=39.32
00:12:35.986 lat (msec): min=25, max=331, avg=87.00, stdev=39.33
00:12:35.986 clat percentiles (msec):
00:12:35.986 | 1.00th=[ 28], 5.00th=[ 56], 10.00th=[ 58], 20.00th=[ 62],
00:12:35.986 | 30.00th=[ 66], 40.00th=[ 70], 50.00th=[ 74], 60.00th=[ 80],
00:12:35.986 | 70.00th=[ 91], 80.00th=[ 106], 90.00th=[ 133], 95.00th=[ 165],
00:12:35.986 | 99.00th=[ 259], 99.50th=[ 271], 99.90th=[ 334], 99.95th=[ 334],
00:12:35.986 | 99.99th=[ 334]
00:12:35.986 bw ( KiB/s): min= 1536, max=17920, per=0.82%, avg=10137.75, stdev=4884.40, samples=20
00:12:35.986 iops : min= 12, max= 140, avg=79.15, stdev=38.13, samples=20
00:12:35.986 lat (msec) : 10=9.31%, 20=28.89%, 50=6.32%, 100=42.92%, 250=11.94%
00:12:35.986 lat (msec) : 500=0.62%
00:12:35.986 cpu : usr=0.56%, sys=0.29%, ctx=2510, majf=0, minf=1
00:12:35.986 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:12:35.986 submit : 0=0.0%, 4=100.0%,
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:35.986 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:35.986 issued rwts: total=640,800,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:35.986 latency : target=0, window=0, percentile=100.00%, depth=8
00:12:35.986 job10: (groupid=0, jobs=1): err= 0: pid=81771: Tue Jul 23 05:03:36 2024
00:12:35.986 read: IOPS=109, BW=13.7MiB/s (14.3MB/s)(120MiB/8784msec)
00:12:35.986 slat (usec): min=5, max=4383, avg=62.04, stdev=190.29
00:12:35.986 clat (msec): min=2, max=136, avg=11.82, stdev=13.85
00:12:35.986 lat (msec): min=2, max=136, avg=11.88, stdev=13.85
00:12:35.986 clat percentiles (msec):
00:12:35.986 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 7],
00:12:35.986 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 10],
00:12:35.986 | 70.00th=[ 11], 80.00th=[ 13], 90.00th=[ 18], 95.00th=[ 30],
00:12:35.986 | 99.00th=[ 90], 99.50th=[ 121], 99.90th=[ 138], 99.95th=[ 138],
00:12:35.986 | 99.99th=[ 138]
00:12:35.986 write: IOPS=124, BW=15.5MiB/s (16.3MB/s)(133MiB/8575msec); 0 zone resets
00:12:35.986 slat (usec): min=38, max=2703, avg=146.90, stdev=232.26
00:12:35.986 clat (msec): min=15, max=197, avg=63.78, stdev=22.30
00:12:35.986 lat (msec): min=15, max=197, avg=63.93, stdev=22.31
00:12:35.986 clat percentiles (msec):
00:12:35.986 | 1.00th=[ 37], 5.00th=[ 41], 10.00th=[ 42], 20.00th=[ 46],
00:12:35.986 | 30.00th=[ 51], 40.00th=[ 55], 50.00th=[ 60], 60.00th=[ 65],
00:12:35.986 | 70.00th=[ 70], 80.00th=[ 79], 90.00th=[ 90], 95.00th=[ 102],
00:12:35.986 | 99.00th=[ 155], 99.50th=[ 178], 99.90th=[ 197], 99.95th=[ 199],
00:12:35.986 | 99.99th=[ 199]
00:12:35.986 bw ( KiB/s): min= 5376, max=19456, per=1.09%, avg=13552.05, stdev=4383.02, samples=20
00:12:35.986 iops : min= 42, max= 152, avg=105.75, stdev=34.17, samples=20
00:12:35.986 lat (msec) : 4=1.38%, 10=29.12%, 20=13.72%, 50=17.82%, 100=34.95%
00:12:35.986 lat (msec) : 250=3.01%
00:12:35.986 cpu : usr=0.63%, sys=0.50%, ctx=3390, majf=0,
minf=5 00:12:35.986 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.986 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.986 issued rwts: total=960,1066,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.986 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.986 job11: (groupid=0, jobs=1): err= 0: pid=81772: Tue Jul 23 05:03:36 2024 00:12:35.986 read: IOPS=108, BW=13.6MiB/s (14.3MB/s)(122MiB/8926msec) 00:12:35.986 slat (usec): min=6, max=1699, avg=55.48, stdev=108.59 00:12:35.986 clat (usec): min=3129, max=87788, avg=10398.34, stdev=9831.32 00:12:35.986 lat (usec): min=3320, max=87800, avg=10453.81, stdev=9831.40 00:12:35.986 clat percentiles (usec): 00:12:35.986 | 1.00th=[ 3818], 5.00th=[ 4686], 10.00th=[ 5604], 20.00th=[ 6194], 00:12:35.986 | 30.00th=[ 6718], 40.00th=[ 7111], 50.00th=[ 7701], 60.00th=[ 8455], 00:12:35.986 | 70.00th=[ 9765], 80.00th=[11600], 90.00th=[14353], 95.00th=[23987], 00:12:35.986 | 99.00th=[66847], 99.50th=[76022], 99.90th=[87557], 99.95th=[87557], 00:12:35.986 | 99.99th=[87557] 00:12:35.986 write: IOPS=128, BW=16.1MiB/s (16.8MB/s)(140MiB/8721msec); 0 zone resets 00:12:35.986 slat (usec): min=39, max=2416, avg=147.39, stdev=216.99 00:12:35.986 clat (msec): min=20, max=231, avg=61.68, stdev=24.20 00:12:35.986 lat (msec): min=20, max=231, avg=61.83, stdev=24.21 00:12:35.986 clat percentiles (msec): 00:12:35.986 | 1.00th=[ 33], 5.00th=[ 39], 10.00th=[ 42], 20.00th=[ 44], 00:12:35.986 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 56], 60.00th=[ 61], 00:12:35.986 | 70.00th=[ 66], 80.00th=[ 74], 90.00th=[ 86], 95.00th=[ 108], 00:12:35.986 | 99.00th=[ 167], 99.50th=[ 182], 99.90th=[ 190], 99.95th=[ 232], 00:12:35.986 | 99.99th=[ 232] 00:12:35.986 bw ( KiB/s): min= 5632, max=24320, per=1.15%, avg=14332.89, stdev=5346.16, samples=19 00:12:35.986 iops : min= 44, max= 190, avg=111.89, stdev=41.77, 
samples=19 00:12:35.986 lat (msec) : 4=0.81%, 10=32.12%, 20=10.33%, 50=21.18%, 100=32.17% 00:12:35.986 lat (msec) : 250=3.39% 00:12:35.986 cpu : usr=0.78%, sys=0.40%, ctx=3478, majf=0, minf=5 00:12:35.986 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.986 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.986 issued rwts: total=972,1120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.986 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.986 job12: (groupid=0, jobs=1): err= 0: pid=81773: Tue Jul 23 05:03:36 2024 00:12:35.986 read: IOPS=110, BW=13.8MiB/s (14.5MB/s)(120MiB/8690msec) 00:12:35.986 slat (usec): min=5, max=1788, avg=61.12, stdev=134.12 00:12:35.986 clat (msec): min=2, max=174, avg=12.66, stdev=19.73 00:12:35.986 lat (msec): min=2, max=174, avg=12.72, stdev=19.72 00:12:35.986 clat percentiles (msec): 00:12:35.986 | 1.00th=[ 3], 5.00th=[ 4], 10.00th=[ 4], 20.00th=[ 5], 00:12:35.986 | 30.00th=[ 5], 40.00th=[ 6], 50.00th=[ 7], 60.00th=[ 8], 00:12:35.986 | 70.00th=[ 10], 80.00th=[ 15], 90.00th=[ 24], 95.00th=[ 40], 00:12:35.986 | 99.00th=[ 109], 99.50th=[ 142], 99.90th=[ 176], 99.95th=[ 176], 00:12:35.986 | 99.99th=[ 176] 00:12:35.986 write: IOPS=120, BW=15.1MiB/s (15.8MB/s)(128MiB/8474msec); 0 zone resets 00:12:35.986 slat (usec): min=33, max=5026, avg=154.65, stdev=241.50 00:12:35.986 clat (msec): min=16, max=167, avg=65.94, stdev=19.68 00:12:35.986 lat (msec): min=16, max=167, avg=66.09, stdev=19.68 00:12:35.986 clat percentiles (msec): 00:12:35.986 | 1.00th=[ 40], 5.00th=[ 42], 10.00th=[ 44], 20.00th=[ 50], 00:12:35.986 | 30.00th=[ 54], 40.00th=[ 59], 50.00th=[ 63], 60.00th=[ 68], 00:12:35.986 | 70.00th=[ 74], 80.00th=[ 82], 90.00th=[ 91], 95.00th=[ 101], 00:12:35.986 | 99.00th=[ 133], 99.50th=[ 142], 99.90th=[ 150], 99.95th=[ 167], 00:12:35.986 | 99.99th=[ 167] 00:12:35.986 bw ( KiB/s): min= 
3065, max=17920, per=1.04%, avg=12931.47, stdev=3689.24, samples=19 00:12:35.986 iops : min= 23, max= 140, avg=100.89, stdev=28.96, samples=19 00:12:35.986 lat (msec) : 4=7.62%, 10=27.26%, 20=7.93%, 50=15.95%, 100=37.56% 00:12:35.986 lat (msec) : 250=3.69% 00:12:35.986 cpu : usr=0.76%, sys=0.33%, ctx=3355, majf=0, minf=1 00:12:35.986 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.986 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.986 issued rwts: total=960,1021,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.986 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.986 job13: (groupid=0, jobs=1): err= 0: pid=81774: Tue Jul 23 05:03:36 2024 00:12:35.986 read: IOPS=114, BW=14.3MiB/s (15.0MB/s)(120MiB/8393msec) 00:12:35.986 slat (usec): min=7, max=1956, avg=54.05, stdev=125.90 00:12:35.986 clat (usec): min=1904, max=66246, avg=9086.43, stdev=9285.04 00:12:35.986 lat (usec): min=2876, max=66254, avg=9140.48, stdev=9288.10 00:12:35.986 clat percentiles (usec): 00:12:35.986 | 1.00th=[ 3130], 5.00th=[ 3687], 10.00th=[ 3851], 20.00th=[ 4621], 00:12:35.986 | 30.00th=[ 5014], 40.00th=[ 5407], 50.00th=[ 5997], 60.00th=[ 7111], 00:12:35.986 | 70.00th=[ 8160], 80.00th=[10159], 90.00th=[14746], 95.00th=[31851], 00:12:35.986 | 99.00th=[51643], 99.50th=[57410], 99.90th=[66323], 99.95th=[66323], 00:12:35.986 | 99.99th=[66323] 00:12:35.986 write: IOPS=113, BW=14.2MiB/s (14.9MB/s)(126MiB/8904msec); 0 zone resets 00:12:35.987 slat (usec): min=30, max=4043, avg=149.63, stdev=284.21 00:12:35.987 clat (msec): min=15, max=312, avg=70.08, stdev=28.28 00:12:35.987 lat (msec): min=15, max=312, avg=70.23, stdev=28.28 00:12:35.987 clat percentiles (msec): 00:12:35.987 | 1.00th=[ 40], 5.00th=[ 43], 10.00th=[ 46], 20.00th=[ 51], 00:12:35.987 | 30.00th=[ 56], 40.00th=[ 60], 50.00th=[ 65], 60.00th=[ 71], 00:12:35.987 | 70.00th=[ 77], 
80.00th=[ 85], 90.00th=[ 95], 95.00th=[ 108], 00:12:35.987 | 99.00th=[ 199], 99.50th=[ 236], 99.90th=[ 288], 99.95th=[ 313], 00:12:35.987 | 99.99th=[ 313] 00:12:35.987 bw ( KiB/s): min= 4352, max=18432, per=1.04%, avg=12931.53, stdev=4166.07, samples=19 00:12:35.987 iops : min= 34, max= 144, avg=100.89, stdev=32.51, samples=19 00:12:35.987 lat (msec) : 2=0.05%, 4=5.69%, 10=32.94%, 20=6.90%, 50=12.99% 00:12:35.987 lat (msec) : 100=37.56%, 250=3.71%, 500=0.15% 00:12:35.987 cpu : usr=0.74%, sys=0.36%, ctx=3125, majf=0, minf=5 00:12:35.987 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.987 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.987 issued rwts: total=960,1010,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.987 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.987 job14: (groupid=0, jobs=1): err= 0: pid=81775: Tue Jul 23 05:03:36 2024 00:12:35.987 read: IOPS=119, BW=15.0MiB/s (15.7MB/s)(140MiB/9334msec) 00:12:35.987 slat (usec): min=5, max=1678, avg=52.60, stdev=112.14 00:12:35.987 clat (msec): min=2, max=125, avg= 8.83, stdev= 9.45 00:12:35.987 lat (msec): min=2, max=125, avg= 8.88, stdev= 9.45 00:12:35.987 clat percentiles (msec): 00:12:35.987 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 5], 20.00th=[ 6], 00:12:35.987 | 30.00th=[ 6], 40.00th=[ 7], 50.00th=[ 8], 60.00th=[ 8], 00:12:35.987 | 70.00th=[ 9], 80.00th=[ 10], 90.00th=[ 13], 95.00th=[ 16], 00:12:35.987 | 99.00th=[ 53], 99.50th=[ 80], 99.90th=[ 120], 99.95th=[ 126], 00:12:35.987 | 99.99th=[ 126] 00:12:35.987 write: IOPS=130, BW=16.3MiB/s (17.1MB/s)(144MiB/8825msec); 0 zone resets 00:12:35.987 slat (usec): min=33, max=5379, avg=160.36, stdev=310.09 00:12:35.987 clat (msec): min=3, max=184, avg=60.78, stdev=24.43 00:12:35.987 lat (msec): min=3, max=184, avg=60.94, stdev=24.42 00:12:35.987 clat percentiles (msec): 00:12:35.987 | 1.00th=[ 7], 
5.00th=[ 37], 10.00th=[ 40], 20.00th=[ 44], 00:12:35.987 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 56], 60.00th=[ 61], 00:12:35.987 | 70.00th=[ 68], 80.00th=[ 78], 90.00th=[ 90], 95.00th=[ 108], 00:12:35.987 | 99.00th=[ 144], 99.50th=[ 153], 99.90th=[ 174], 99.95th=[ 186], 00:12:35.987 | 99.99th=[ 186] 00:12:35.987 bw ( KiB/s): min= 7424, max=30658, per=1.18%, avg=14625.75, stdev=5735.69, samples=20 00:12:35.987 iops : min= 58, max= 239, avg=114.20, stdev=44.73, samples=20 00:12:35.987 lat (msec) : 4=1.06%, 10=40.75%, 20=7.71%, 50=17.84%, 100=29.30% 00:12:35.987 lat (msec) : 250=3.35% 00:12:35.987 cpu : usr=0.79%, sys=0.47%, ctx=3680, majf=0, minf=3 00:12:35.987 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.987 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.987 issued rwts: total=1120,1150,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.987 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.987 job15: (groupid=0, jobs=1): err= 0: pid=81782: Tue Jul 23 05:03:36 2024 00:12:35.987 read: IOPS=121, BW=15.2MiB/s (16.0MB/s)(140MiB/9191msec) 00:12:35.987 slat (usec): min=7, max=1668, avg=62.72, stdev=146.13 00:12:35.987 clat (usec): min=2342, max=89502, avg=8575.94, stdev=7283.15 00:12:35.987 lat (usec): min=2872, max=89517, avg=8638.66, stdev=7282.52 00:12:35.987 clat percentiles (usec): 00:12:35.987 | 1.00th=[ 3884], 5.00th=[ 4490], 10.00th=[ 4686], 20.00th=[ 5211], 00:12:35.987 | 30.00th=[ 5735], 40.00th=[ 6259], 50.00th=[ 6783], 60.00th=[ 7570], 00:12:35.987 | 70.00th=[ 8455], 80.00th=[ 9503], 90.00th=[12649], 95.00th=[17957], 00:12:35.987 | 99.00th=[32900], 99.50th=[70779], 99.90th=[89654], 99.95th=[89654], 00:12:35.987 | 99.99th=[89654] 00:12:35.987 write: IOPS=127, BW=16.0MiB/s (16.7MB/s)(142MiB/8869msec); 0 zone resets 00:12:35.987 slat (usec): min=38, max=17938, avg=158.37, stdev=576.60 00:12:35.987 
clat (msec): min=8, max=296, avg=62.04, stdev=27.55 00:12:35.987 lat (msec): min=8, max=296, avg=62.19, stdev=27.55 00:12:35.987 clat percentiles (msec): 00:12:35.987 | 1.00th=[ 19], 5.00th=[ 39], 10.00th=[ 41], 20.00th=[ 44], 00:12:35.987 | 30.00th=[ 47], 40.00th=[ 51], 50.00th=[ 55], 60.00th=[ 61], 00:12:35.987 | 70.00th=[ 67], 80.00th=[ 77], 90.00th=[ 91], 95.00th=[ 106], 00:12:35.987 | 99.00th=[ 157], 99.50th=[ 205], 99.90th=[ 296], 99.95th=[ 296], 00:12:35.987 | 99.99th=[ 296] 00:12:35.987 bw ( KiB/s): min= 5888, max=25344, per=1.16%, avg=14398.45, stdev=5006.61, samples=20 00:12:35.987 iops : min= 46, max= 198, avg=112.45, stdev=39.11, samples=20 00:12:35.987 lat (msec) : 4=0.58%, 10=40.85%, 20=6.84%, 50=20.74%, 100=27.84% 00:12:35.987 lat (msec) : 250=3.02%, 500=0.13% 00:12:35.987 cpu : usr=0.82%, sys=0.44%, ctx=3610, majf=0, minf=3 00:12:35.987 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.987 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.987 issued rwts: total=1120,1132,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.987 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.987 job16: (groupid=0, jobs=1): err= 0: pid=81783: Tue Jul 23 05:03:36 2024 00:12:35.987 read: IOPS=106, BW=13.3MiB/s (13.9MB/s)(120MiB/9046msec) 00:12:35.987 slat (usec): min=6, max=1662, avg=70.02, stdev=140.08 00:12:35.987 clat (msec): min=3, max=169, avg=12.64, stdev=17.07 00:12:35.987 lat (msec): min=3, max=169, avg=12.71, stdev=17.07 00:12:35.987 clat percentiles (msec): 00:12:35.987 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 7], 00:12:35.987 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 10], 00:12:35.987 | 70.00th=[ 12], 80.00th=[ 14], 90.00th=[ 20], 95.00th=[ 27], 00:12:35.987 | 99.00th=[ 107], 99.50th=[ 159], 99.90th=[ 169], 99.95th=[ 169], 00:12:35.987 | 99.99th=[ 169] 00:12:35.987 write: IOPS=124, 
BW=15.6MiB/s (16.3MB/s)(133MiB/8518msec); 0 zone resets 00:12:35.987 slat (usec): min=31, max=3778, avg=140.53, stdev=240.80 00:12:35.987 clat (msec): min=11, max=342, avg=63.60, stdev=31.53 00:12:35.987 lat (msec): min=11, max=342, avg=63.74, stdev=31.52 00:12:35.987 clat percentiles (msec): 00:12:35.987 | 1.00th=[ 36], 5.00th=[ 39], 10.00th=[ 41], 20.00th=[ 44], 00:12:35.987 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 56], 60.00th=[ 61], 00:12:35.987 | 70.00th=[ 67], 80.00th=[ 75], 90.00th=[ 89], 95.00th=[ 121], 00:12:35.987 | 99.00th=[ 186], 99.50th=[ 247], 99.90th=[ 342], 99.95th=[ 342], 00:12:35.987 | 99.99th=[ 342] 00:12:35.987 bw ( KiB/s): min= 1021, max=23552, per=1.09%, avg=13501.05, stdev=5553.41, samples=20 00:12:35.987 iops : min= 7, max= 184, avg=105.25, stdev=43.55, samples=20 00:12:35.987 lat (msec) : 4=0.74%, 10=28.49%, 20=14.09%, 50=22.35%, 100=29.57% 00:12:35.987 lat (msec) : 250=4.50%, 500=0.25% 00:12:35.987 cpu : usr=0.77%, sys=0.35%, ctx=3399, majf=0, minf=5 00:12:35.987 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.987 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.987 issued rwts: total=960,1062,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.987 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.987 job17: (groupid=0, jobs=1): err= 0: pid=81784: Tue Jul 23 05:03:36 2024 00:12:35.987 read: IOPS=104, BW=13.1MiB/s (13.8MB/s)(120MiB/9147msec) 00:12:35.987 slat (usec): min=5, max=1313, avg=63.00, stdev=126.82 00:12:35.987 clat (msec): min=2, max=244, avg=12.83, stdev=21.89 00:12:35.987 lat (msec): min=2, max=244, avg=12.89, stdev=21.89 00:12:35.987 clat percentiles (msec): 00:12:35.987 | 1.00th=[ 4], 5.00th=[ 4], 10.00th=[ 6], 20.00th=[ 7], 00:12:35.987 | 30.00th=[ 7], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 10], 00:12:35.987 | 70.00th=[ 12], 80.00th=[ 13], 90.00th=[ 18], 95.00th=[ 26], 
00:12:35.987 | 99.00th=[ 114], 99.50th=[ 215], 99.90th=[ 245], 99.95th=[ 245], 00:12:35.987 | 99.99th=[ 245] 00:12:35.987 write: IOPS=127, BW=16.0MiB/s (16.8MB/s)(136MiB/8501msec); 0 zone resets 00:12:35.987 slat (usec): min=35, max=3305, avg=142.07, stdev=220.83 00:12:35.987 clat (msec): min=19, max=332, avg=62.02, stdev=29.84 00:12:35.987 lat (msec): min=19, max=332, avg=62.17, stdev=29.83 00:12:35.987 clat percentiles (msec): 00:12:35.987 | 1.00th=[ 23], 5.00th=[ 39], 10.00th=[ 41], 20.00th=[ 44], 00:12:35.987 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 55], 60.00th=[ 60], 00:12:35.987 | 70.00th=[ 65], 80.00th=[ 75], 90.00th=[ 91], 95.00th=[ 111], 00:12:35.987 | 99.00th=[ 174], 99.50th=[ 264], 99.90th=[ 326], 99.95th=[ 334], 00:12:35.987 | 99.99th=[ 334] 00:12:35.987 bw ( KiB/s): min= 1792, max=24576, per=1.11%, avg=13813.95, stdev=5511.13, samples=20 00:12:35.987 iops : min= 14, max= 192, avg=107.75, stdev=43.09, samples=20 00:12:35.987 lat (msec) : 4=2.64%, 10=26.72%, 20=14.36%, 50=21.93%, 100=29.95% 00:12:35.987 lat (msec) : 250=4.05%, 500=0.34% 00:12:35.987 cpu : usr=0.69%, sys=0.44%, ctx=3358, majf=0, minf=3 00:12:35.987 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.987 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.987 issued rwts: total=960,1087,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.987 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.987 job18: (groupid=0, jobs=1): err= 0: pid=81785: Tue Jul 23 05:03:36 2024 00:12:35.987 read: IOPS=107, BW=13.4MiB/s (14.1MB/s)(120MiB/8955msec) 00:12:35.987 slat (usec): min=6, max=1430, avg=60.71, stdev=126.53 00:12:35.987 clat (msec): min=2, max=101, avg=12.84, stdev=12.69 00:12:35.987 lat (msec): min=3, max=101, avg=12.90, stdev=12.70 00:12:35.987 clat percentiles (msec): 00:12:35.987 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 5], 20.00th=[ 7], 
00:12:35.987 | 30.00th=[ 7], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 11], 00:12:35.987 | 70.00th=[ 12], 80.00th=[ 16], 90.00th=[ 25], 95.00th=[ 35], 00:12:35.988 | 99.00th=[ 83], 99.50th=[ 99], 99.90th=[ 102], 99.95th=[ 102], 00:12:35.988 | 99.99th=[ 102] 00:12:35.988 write: IOPS=121, BW=15.1MiB/s (15.9MB/s)(128MiB/8449msec); 0 zone resets 00:12:35.988 slat (usec): min=30, max=5185, avg=145.79, stdev=282.20 00:12:35.988 clat (msec): min=29, max=202, avg=65.38, stdev=23.71 00:12:35.988 lat (msec): min=29, max=202, avg=65.52, stdev=23.71 00:12:35.988 clat percentiles (msec): 00:12:35.988 | 1.00th=[ 37], 5.00th=[ 40], 10.00th=[ 43], 20.00th=[ 47], 00:12:35.988 | 30.00th=[ 51], 40.00th=[ 55], 50.00th=[ 60], 60.00th=[ 66], 00:12:35.988 | 70.00th=[ 74], 80.00th=[ 81], 90.00th=[ 92], 95.00th=[ 109], 00:12:35.988 | 99.00th=[ 157], 99.50th=[ 178], 99.90th=[ 186], 99.95th=[ 203], 00:12:35.988 | 99.99th=[ 203] 00:12:35.988 bw ( KiB/s): min= 1792, max=22016, per=1.05%, avg=13016.65, stdev=5274.83, samples=20 00:12:35.988 iops : min= 14, max= 172, avg=101.65, stdev=41.24, samples=20 00:12:35.988 lat (msec) : 4=1.97%, 10=24.50%, 20=15.47%, 50=20.21%, 100=34.07% 00:12:35.988 lat (msec) : 250=3.78% 00:12:35.988 cpu : usr=0.73%, sys=0.38%, ctx=3313, majf=0, minf=3 00:12:35.988 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.988 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.988 issued rwts: total=960,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.988 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.988 job19: (groupid=0, jobs=1): err= 0: pid=81786: Tue Jul 23 05:03:36 2024 00:12:35.988 read: IOPS=120, BW=15.0MiB/s (15.7MB/s)(140MiB/9324msec) 00:12:35.988 slat (usec): min=6, max=2836, avg=64.10, stdev=191.96 00:12:35.988 clat (msec): min=2, max=103, avg= 9.33, stdev=10.15 00:12:35.988 lat (msec): min=2, max=104, 
avg= 9.40, stdev=10.18 00:12:35.988 clat percentiles (msec): 00:12:35.988 | 1.00th=[ 4], 5.00th=[ 4], 10.00th=[ 5], 20.00th=[ 5], 00:12:35.988 | 30.00th=[ 6], 40.00th=[ 7], 50.00th=[ 8], 60.00th=[ 8], 00:12:35.988 | 70.00th=[ 10], 80.00th=[ 11], 90.00th=[ 15], 95.00th=[ 20], 00:12:35.988 | 99.00th=[ 71], 99.50th=[ 94], 99.90th=[ 102], 99.95th=[ 104], 00:12:35.988 | 99.99th=[ 104] 00:12:35.988 write: IOPS=129, BW=16.2MiB/s (16.9MB/s)(141MiB/8750msec); 0 zone resets 00:12:35.988 slat (usec): min=30, max=5198, avg=157.78, stdev=324.15 00:12:35.988 clat (msec): min=3, max=189, avg=61.32, stdev=24.81 00:12:35.988 lat (msec): min=4, max=189, avg=61.48, stdev=24.80 00:12:35.988 clat percentiles (msec): 00:12:35.988 | 1.00th=[ 8], 5.00th=[ 38], 10.00th=[ 41], 20.00th=[ 44], 00:12:35.988 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 56], 60.00th=[ 62], 00:12:35.988 | 70.00th=[ 70], 80.00th=[ 80], 90.00th=[ 91], 95.00th=[ 106], 00:12:35.988 | 99.00th=[ 153], 99.50th=[ 171], 99.90th=[ 176], 99.95th=[ 190], 00:12:35.988 | 99.99th=[ 190] 00:12:35.988 bw ( KiB/s): min= 4343, max=28160, per=1.16%, avg=14370.75, stdev=5878.92, samples=20 00:12:35.988 iops : min= 33, max= 220, avg=112.15, stdev=46.00, samples=20 00:12:35.988 lat (msec) : 4=4.35%, 10=34.56%, 20=10.13%, 50=18.92%, 100=28.70% 00:12:35.988 lat (msec) : 250=3.33% 00:12:35.988 cpu : usr=0.76%, sys=0.47%, ctx=3645, majf=0, minf=1 00:12:35.988 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.988 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.988 issued rwts: total=1120,1131,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.988 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.988 job20: (groupid=0, jobs=1): err= 0: pid=81787: Tue Jul 23 05:03:36 2024 00:12:35.988 read: IOPS=121, BW=15.1MiB/s (15.9MB/s)(140MiB/9251msec) 00:12:35.988 slat (usec): min=6, max=2536, 
avg=72.09, stdev=163.27 00:12:35.988 clat (usec): min=2776, max=51303, avg=10100.74, stdev=5787.65 00:12:35.988 lat (usec): min=3711, max=51683, avg=10172.83, stdev=5792.71 00:12:35.988 clat percentiles (usec): 00:12:35.988 | 1.00th=[ 4015], 5.00th=[ 4817], 10.00th=[ 5538], 20.00th=[ 6194], 00:12:35.988 | 30.00th=[ 6980], 40.00th=[ 7767], 50.00th=[ 8586], 60.00th=[ 9372], 00:12:35.988 | 70.00th=[10683], 80.00th=[13042], 90.00th=[16188], 95.00th=[19006], 00:12:35.988 | 99.00th=[34866], 99.50th=[48497], 99.90th=[50070], 99.95th=[51119], 00:12:35.988 | 99.99th=[51119] 00:12:35.988 write: IOPS=132, BW=16.5MiB/s (17.3MB/s)(142MiB/8605msec); 0 zone resets 00:12:35.988 slat (usec): min=36, max=3138, avg=137.68, stdev=227.02 00:12:35.988 clat (msec): min=10, max=219, avg=59.93, stdev=26.29 00:12:35.988 lat (msec): min=10, max=220, avg=60.07, stdev=26.29 00:12:35.988 clat percentiles (msec): 00:12:35.988 | 1.00th=[ 15], 5.00th=[ 39], 10.00th=[ 40], 20.00th=[ 42], 00:12:35.988 | 30.00th=[ 46], 40.00th=[ 49], 50.00th=[ 52], 60.00th=[ 58], 00:12:35.988 | 70.00th=[ 64], 80.00th=[ 73], 90.00th=[ 92], 95.00th=[ 115], 00:12:35.988 | 99.00th=[ 161], 99.50th=[ 174], 99.90th=[ 207], 99.95th=[ 220], 00:12:35.988 | 99.99th=[ 220] 00:12:35.988 bw ( KiB/s): min= 4864, max=27648, per=1.16%, avg=14460.10, stdev=6282.55, samples=20 00:12:35.988 iops : min= 38, max= 216, avg=112.90, stdev=49.03, samples=20 00:12:35.988 lat (msec) : 4=0.40%, 10=31.95%, 20=15.86%, 50=24.32%, 100=23.57% 00:12:35.988 lat (msec) : 250=3.90% 00:12:35.988 cpu : usr=0.80%, sys=0.45%, ctx=3759, majf=0, minf=9 00:12:35.988 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.988 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.988 issued rwts: total=1120,1137,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.988 latency : target=0, window=0, percentile=100.00%, depth=8 
00:12:35.988 job21: (groupid=0, jobs=1): err= 0: pid=81788: Tue Jul 23 05:03:36 2024 00:12:35.988 read: IOPS=106, BW=13.3MiB/s (14.0MB/s)(120MiB/9006msec) 00:12:35.988 slat (usec): min=7, max=1601, avg=65.81, stdev=152.12 00:12:35.988 clat (msec): min=2, max=123, avg=13.27, stdev=14.68 00:12:35.988 lat (msec): min=2, max=123, avg=13.33, stdev=14.67 00:12:35.988 clat percentiles (msec): 00:12:35.988 | 1.00th=[ 4], 5.00th=[ 4], 10.00th=[ 5], 20.00th=[ 6], 00:12:35.988 | 30.00th=[ 7], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 11], 00:12:35.988 | 70.00th=[ 13], 80.00th=[ 16], 90.00th=[ 24], 95.00th=[ 36], 00:12:35.988 | 99.00th=[ 77], 99.50th=[ 115], 99.90th=[ 124], 99.95th=[ 124], 00:12:35.988 | 99.99th=[ 124] 00:12:35.988 write: IOPS=123, BW=15.4MiB/s (16.2MB/s)(130MiB/8416msec); 0 zone resets 00:12:35.988 slat (usec): min=34, max=2554, avg=135.96, stdev=192.85 00:12:35.988 clat (msec): min=28, max=187, avg=64.40, stdev=22.83 00:12:35.988 lat (msec): min=28, max=187, avg=64.53, stdev=22.82 00:12:35.988 clat percentiles (msec): 00:12:35.988 | 1.00th=[ 36], 5.00th=[ 39], 10.00th=[ 42], 20.00th=[ 46], 00:12:35.988 | 30.00th=[ 51], 40.00th=[ 57], 50.00th=[ 62], 60.00th=[ 67], 00:12:35.988 | 70.00th=[ 72], 80.00th=[ 78], 90.00th=[ 89], 95.00th=[ 104], 00:12:35.988 | 99.00th=[ 161], 99.50th=[ 165], 99.90th=[ 184], 99.95th=[ 188], 00:12:35.988 | 99.99th=[ 188] 00:12:35.988 bw ( KiB/s): min= 2560, max=22738, per=1.06%, avg=13191.15, stdev=5233.74, samples=20 00:12:35.988 iops : min= 20, max= 177, avg=102.90, stdev=40.77, samples=20 00:12:35.988 lat (msec) : 4=2.50%, 10=25.73%, 20=13.11%, 50=19.47%, 100=35.94% 00:12:35.988 lat (msec) : 250=3.25% 00:12:35.988 cpu : usr=0.74%, sys=0.37%, ctx=3353, majf=0, minf=3 00:12:35.988 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.988 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.988 
issued rwts: total=960,1038,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.988 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.988 job22: (groupid=0, jobs=1): err= 0: pid=81789: Tue Jul 23 05:03:36 2024 00:12:35.988 read: IOPS=102, BW=12.9MiB/s (13.5MB/s)(121MiB/9404msec) 00:12:35.988 slat (usec): min=5, max=4178, avg=71.21, stdev=202.89 00:12:35.988 clat (usec): min=1560, max=106731, avg=13994.82, stdev=15683.30 00:12:35.988 lat (msec): min=2, max=106, avg=14.07, stdev=15.71 00:12:35.988 clat percentiles (msec): 00:12:35.989 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 5], 20.00th=[ 6], 00:12:35.989 | 30.00th=[ 7], 40.00th=[ 8], 50.00th=[ 8], 60.00th=[ 10], 00:12:35.989 | 70.00th=[ 13], 80.00th=[ 19], 90.00th=[ 30], 95.00th=[ 52], 00:12:35.989 | 99.00th=[ 75], 99.50th=[ 101], 99.90th=[ 107], 99.95th=[ 107], 00:12:35.989 | 99.99th=[ 107] 00:12:35.989 write: IOPS=134, BW=16.9MiB/s (17.7MB/s)(140MiB/8308msec); 0 zone resets 00:12:35.989 slat (usec): min=30, max=6923, avg=151.66, stdev=366.22 00:12:35.989 clat (usec): min=1206, max=183539, avg=58770.34, stdev=25914.87 00:12:35.989 lat (usec): min=1278, max=183604, avg=58922.00, stdev=25906.71 00:12:35.989 clat percentiles (msec): 00:12:35.989 | 1.00th=[ 4], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 41], 00:12:35.989 | 30.00th=[ 45], 40.00th=[ 48], 50.00th=[ 53], 60.00th=[ 58], 00:12:35.989 | 70.00th=[ 65], 80.00th=[ 77], 90.00th=[ 93], 95.00th=[ 110], 00:12:35.989 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 180], 99.95th=[ 184], 00:12:35.989 | 99.99th=[ 184] 00:12:35.989 bw ( KiB/s): min= 6400, max=34422, per=1.15%, avg=14324.00, stdev=6877.30, samples=20 00:12:35.989 iops : min= 50, max= 268, avg=111.70, stdev=53.64, samples=20 00:12:35.989 lat (msec) : 2=0.43%, 4=1.10%, 10=28.59%, 20=10.25%, 50=27.44% 00:12:35.989 lat (msec) : 100=27.97%, 250=4.21% 00:12:35.989 cpu : usr=0.78%, sys=0.37%, ctx=3379, majf=0, minf=5 00:12:35.989 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.989 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.989 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.989 issued rwts: total=968,1120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.989 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.989 job23: (groupid=0, jobs=1): err= 0: pid=81790: Tue Jul 23 05:03:36 2024 00:12:35.989 read: IOPS=117, BW=14.6MiB/s (15.3MB/s)(140MiB/9567msec) 00:12:35.989 slat (usec): min=7, max=3463, avg=66.45, stdev=185.17 00:12:35.989 clat (msec): min=2, max=130, avg=10.00, stdev=11.66 00:12:35.989 lat (msec): min=2, max=130, avg=10.07, stdev=11.66 00:12:35.989 clat percentiles (msec): 00:12:35.989 | 1.00th=[ 4], 5.00th=[ 4], 10.00th=[ 5], 20.00th=[ 6], 00:12:35.989 | 30.00th=[ 6], 40.00th=[ 7], 50.00th=[ 8], 60.00th=[ 9], 00:12:35.989 | 70.00th=[ 10], 80.00th=[ 12], 90.00th=[ 15], 95.00th=[ 24], 00:12:35.989 | 99.00th=[ 52], 99.50th=[ 124], 99.90th=[ 126], 99.95th=[ 131], 00:12:35.989 | 99.99th=[ 131] 00:12:35.989 write: IOPS=134, BW=16.9MiB/s (17.7MB/s)(146MiB/8638msec); 0 zone resets 00:12:35.989 slat (usec): min=30, max=24668, avg=167.78, stdev=763.08 00:12:35.989 clat (usec): min=918, max=175838, avg=58826.40, stdev=27212.38 00:12:35.989 lat (usec): min=1009, max=175903, avg=58994.18, stdev=27192.54 00:12:35.989 clat percentiles (msec): 00:12:35.989 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 39], 20.00th=[ 42], 00:12:35.989 | 30.00th=[ 46], 40.00th=[ 51], 50.00th=[ 55], 60.00th=[ 60], 00:12:35.989 | 70.00th=[ 67], 80.00th=[ 74], 90.00th=[ 93], 95.00th=[ 109], 00:12:35.989 | 99.00th=[ 161], 99.50th=[ 165], 99.90th=[ 167], 99.95th=[ 176], 00:12:35.989 | 99.99th=[ 176] 00:12:35.989 bw ( KiB/s): min= 4864, max=41042, per=1.19%, avg=14824.95, stdev=8180.85, samples=20 00:12:35.989 iops : min= 38, max= 320, avg=115.75, stdev=63.80, samples=20 00:12:35.989 lat (usec) : 1000=0.04% 00:12:35.989 lat (msec) : 2=0.18%, 4=4.51%, 10=32.65%, 20=11.82%, 50=19.43% 00:12:35.989 lat 
(msec) : 100=27.26%, 250=4.11% 00:12:35.989 cpu : usr=0.81%, sys=0.45%, ctx=3747, majf=0, minf=3 00:12:35.989 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.989 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.989 issued rwts: total=1120,1165,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.989 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.989 job24: (groupid=0, jobs=1): err= 0: pid=81791: Tue Jul 23 05:03:36 2024 00:12:35.989 read: IOPS=119, BW=15.0MiB/s (15.7MB/s)(136MiB/9098msec) 00:12:35.989 slat (usec): min=6, max=4086, avg=73.83, stdev=215.05 00:12:35.989 clat (usec): min=3194, max=64993, avg=8652.35, stdev=6768.73 00:12:35.989 lat (usec): min=3336, max=65073, avg=8726.17, stdev=6773.54 00:12:35.989 clat percentiles (usec): 00:12:35.989 | 1.00th=[ 3818], 5.00th=[ 4228], 10.00th=[ 4490], 20.00th=[ 4948], 00:12:35.989 | 30.00th=[ 5604], 40.00th=[ 6390], 50.00th=[ 7177], 60.00th=[ 8029], 00:12:35.989 | 70.00th=[ 8848], 80.00th=[10159], 90.00th=[12649], 95.00th=[16581], 00:12:35.989 | 99.00th=[46400], 99.50th=[51643], 99.90th=[64750], 99.95th=[64750], 00:12:35.989 | 99.99th=[64750] 00:12:35.989 write: IOPS=127, BW=15.9MiB/s (16.7MB/s)(140MiB/8801msec); 0 zone resets 00:12:35.989 slat (usec): min=30, max=11516, avg=158.31, stdev=409.86 00:12:35.989 clat (msec): min=18, max=251, avg=62.21, stdev=29.01 00:12:35.989 lat (msec): min=20, max=251, avg=62.37, stdev=29.00 00:12:35.989 clat percentiles (msec): 00:12:35.989 | 1.00th=[ 26], 5.00th=[ 39], 10.00th=[ 40], 20.00th=[ 43], 00:12:35.989 | 30.00th=[ 46], 40.00th=[ 50], 50.00th=[ 54], 60.00th=[ 59], 00:12:35.989 | 70.00th=[ 66], 80.00th=[ 75], 90.00th=[ 96], 95.00th=[ 122], 00:12:35.989 | 99.00th=[ 174], 99.50th=[ 213], 99.90th=[ 247], 99.95th=[ 251], 00:12:35.989 | 99.99th=[ 251] 00:12:35.989 bw ( KiB/s): min= 4608, max=25344, per=1.17%, avg=14549.53, 
stdev=5665.10, samples=19 00:12:35.989 iops : min= 36, max= 198, avg=113.63, stdev=44.23, samples=19 00:12:35.989 lat (msec) : 4=1.00%, 10=38.17%, 20=8.46%, 50=22.80%, 100=24.88% 00:12:35.989 lat (msec) : 250=4.66%, 500=0.05% 00:12:35.989 cpu : usr=0.75%, sys=0.47%, ctx=3660, majf=0, minf=3 00:12:35.989 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.989 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.989 issued rwts: total=1091,1120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.989 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.989 job25: (groupid=0, jobs=1): err= 0: pid=81792: Tue Jul 23 05:03:36 2024 00:12:35.989 read: IOPS=113, BW=14.2MiB/s (14.9MB/s)(120MiB/8450msec) 00:12:35.989 slat (usec): min=6, max=1279, avg=61.79, stdev=121.21 00:12:35.989 clat (msec): min=2, max=122, avg=12.99, stdev=13.51 00:12:35.989 lat (msec): min=2, max=122, avg=13.06, stdev=13.51 00:12:35.989 clat percentiles (msec): 00:12:35.989 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 7], 00:12:35.989 | 30.00th=[ 7], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 11], 00:12:35.989 | 70.00th=[ 12], 80.00th=[ 16], 90.00th=[ 26], 95.00th=[ 37], 00:12:35.989 | 99.00th=[ 71], 99.50th=[ 112], 99.90th=[ 123], 99.95th=[ 123], 00:12:35.989 | 99.99th=[ 123] 00:12:35.989 write: IOPS=118, BW=14.8MiB/s (15.6MB/s)(125MiB/8437msec); 0 zone resets 00:12:35.989 slat (usec): min=36, max=3572, avg=154.47, stdev=263.08 00:12:35.989 clat (msec): min=20, max=351, avg=66.96, stdev=28.27 00:12:35.989 lat (msec): min=20, max=351, avg=67.11, stdev=28.27 00:12:35.989 clat percentiles (msec): 00:12:35.989 | 1.00th=[ 37], 5.00th=[ 40], 10.00th=[ 42], 20.00th=[ 47], 00:12:35.989 | 30.00th=[ 52], 40.00th=[ 56], 50.00th=[ 61], 60.00th=[ 66], 00:12:35.989 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 100], 95.00th=[ 114], 00:12:35.989 | 99.00th=[ 155], 99.50th=[ 
174], 99.90th=[ 330], 99.95th=[ 351], 00:12:35.989 | 99.99th=[ 351] 00:12:35.989 bw ( KiB/s): min= 3584, max=18906, per=1.04%, avg=12918.53, stdev=4432.92, samples=19 00:12:35.989 iops : min= 28, max= 147, avg=100.79, stdev=34.47, samples=19 00:12:35.989 lat (msec) : 4=1.02%, 10=28.25%, 20=13.36%, 50=19.38%, 100=32.89% 00:12:35.989 lat (msec) : 250=4.90%, 500=0.20% 00:12:35.989 cpu : usr=0.67%, sys=0.44%, ctx=3288, majf=0, minf=3 00:12:35.989 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.989 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.989 issued rwts: total=960,1001,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.989 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.989 job26: (groupid=0, jobs=1): err= 0: pid=81793: Tue Jul 23 05:03:36 2024 00:12:35.989 read: IOPS=107, BW=13.4MiB/s (14.1MB/s)(120MiB/8923msec) 00:12:35.989 slat (usec): min=6, max=1305, avg=56.13, stdev=109.32 00:12:35.989 clat (msec): min=2, max=136, avg=11.94, stdev=15.50 00:12:35.989 lat (msec): min=2, max=136, avg=11.99, stdev=15.50 00:12:35.989 clat percentiles (msec): 00:12:35.989 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 5], 20.00th=[ 6], 00:12:35.989 | 30.00th=[ 7], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 10], 00:12:35.989 | 70.00th=[ 12], 80.00th=[ 14], 90.00th=[ 19], 95.00th=[ 28], 00:12:35.989 | 99.00th=[ 122], 99.50th=[ 127], 99.90th=[ 136], 99.95th=[ 136], 00:12:35.989 | 99.99th=[ 136] 00:12:35.989 write: IOPS=121, BW=15.2MiB/s (15.9MB/s)(130MiB/8573msec); 0 zone resets 00:12:35.989 slat (usec): min=36, max=6741, avg=154.71, stdev=301.37 00:12:35.989 clat (msec): min=25, max=221, avg=65.48, stdev=21.81 00:12:35.989 lat (msec): min=25, max=221, avg=65.64, stdev=21.79 00:12:35.989 clat percentiles (msec): 00:12:35.989 | 1.00th=[ 34], 5.00th=[ 40], 10.00th=[ 43], 20.00th=[ 47], 00:12:35.989 | 30.00th=[ 53], 40.00th=[ 58], 
50.00th=[ 63], 60.00th=[ 68], 00:12:35.989 | 70.00th=[ 73], 80.00th=[ 79], 90.00th=[ 92], 95.00th=[ 105], 00:12:35.989 | 99.00th=[ 134], 99.50th=[ 140], 99.90th=[ 220], 99.95th=[ 222], 00:12:35.989 | 99.99th=[ 222] 00:12:35.989 bw ( KiB/s): min= 3072, max=20480, per=1.06%, avg=13216.30, stdev=4476.35, samples=20 00:12:35.989 iops : min= 24, max= 160, avg=103.10, stdev=34.91, samples=20 00:12:35.989 lat (msec) : 4=2.00%, 10=28.25%, 20=13.90%, 50=16.65%, 100=35.25% 00:12:35.989 lat (msec) : 250=3.95% 00:12:35.989 cpu : usr=0.66%, sys=0.43%, ctx=3422, majf=0, minf=1 00:12:35.989 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.989 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.989 issued rwts: total=960,1040,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.989 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.989 job27: (groupid=0, jobs=1): err= 0: pid=81794: Tue Jul 23 05:03:36 2024 00:12:35.989 read: IOPS=121, BW=15.2MiB/s (16.0MB/s)(140MiB/9203msec) 00:12:35.989 slat (usec): min=6, max=1650, avg=64.58, stdev=131.35 00:12:35.990 clat (usec): min=3384, max=45142, avg=9618.12, stdev=4883.02 00:12:35.990 lat (usec): min=3399, max=45150, avg=9682.70, stdev=4876.08 00:12:35.990 clat percentiles (usec): 00:12:35.990 | 1.00th=[ 4359], 5.00th=[ 5014], 10.00th=[ 5473], 20.00th=[ 6259], 00:12:35.990 | 30.00th=[ 6783], 40.00th=[ 7242], 50.00th=[ 8094], 60.00th=[ 9634], 00:12:35.990 | 70.00th=[10814], 80.00th=[11863], 90.00th=[14877], 95.00th=[19006], 00:12:35.990 | 99.00th=[29230], 99.50th=[32375], 99.90th=[40633], 99.95th=[45351], 00:12:35.990 | 99.99th=[45351] 00:12:35.990 write: IOPS=134, BW=16.8MiB/s (17.6MB/s)(146MiB/8677msec); 0 zone resets 00:12:35.990 slat (usec): min=35, max=4023, avg=148.74, stdev=269.01 00:12:35.990 clat (msec): min=5, max=209, avg=58.96, stdev=22.58 00:12:35.990 lat (msec): min=6, 
max=210, avg=59.10, stdev=22.57 00:12:35.990 clat percentiles (msec): 00:12:35.990 | 1.00th=[ 16], 5.00th=[ 39], 10.00th=[ 40], 20.00th=[ 43], 00:12:35.990 | 30.00th=[ 46], 40.00th=[ 50], 50.00th=[ 54], 60.00th=[ 59], 00:12:35.990 | 70.00th=[ 64], 80.00th=[ 72], 90.00th=[ 87], 95.00th=[ 106], 00:12:35.990 | 99.00th=[ 142], 99.50th=[ 153], 99.90th=[ 205], 99.95th=[ 211], 00:12:35.990 | 99.99th=[ 211] 00:12:35.990 bw ( KiB/s): min= 6898, max=26112, per=1.19%, avg=14842.05, stdev=5516.01, samples=20 00:12:35.990 iops : min= 53, max= 204, avg=115.80, stdev=43.25, samples=20 00:12:35.990 lat (msec) : 4=0.22%, 10=30.46%, 20=17.05%, 50=22.07%, 100=27.32% 00:12:35.990 lat (msec) : 250=2.88% 00:12:35.990 cpu : usr=0.84%, sys=0.43%, ctx=3797, majf=0, minf=1 00:12:35.990 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.990 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.990 issued rwts: total=1120,1168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.990 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.990 job28: (groupid=0, jobs=1): err= 0: pid=81795: Tue Jul 23 05:03:36 2024 00:12:35.990 read: IOPS=105, BW=13.2MiB/s (13.9MB/s)(120MiB/9061msec) 00:12:35.990 slat (usec): min=6, max=1325, avg=60.78, stdev=125.69 00:12:35.990 clat (msec): min=2, max=112, avg=12.18, stdev=13.72 00:12:35.990 lat (msec): min=2, max=112, avg=12.24, stdev=13.72 00:12:35.990 clat percentiles (msec): 00:12:35.990 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 5], 20.00th=[ 6], 00:12:35.990 | 30.00th=[ 7], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 9], 00:12:35.990 | 70.00th=[ 12], 80.00th=[ 16], 90.00th=[ 22], 95.00th=[ 29], 00:12:35.990 | 99.00th=[ 90], 99.50th=[ 110], 99.90th=[ 112], 99.95th=[ 112], 00:12:35.990 | 99.99th=[ 112] 00:12:35.990 write: IOPS=127, BW=15.9MiB/s (16.7MB/s)(136MiB/8541msec); 0 zone resets 00:12:35.990 slat (usec): min=31, 
max=2966, avg=136.50, stdev=201.00 00:12:35.990 clat (msec): min=24, max=292, avg=62.44, stdev=28.24 00:12:35.990 lat (msec): min=24, max=292, avg=62.58, stdev=28.24 00:12:35.990 clat percentiles (msec): 00:12:35.990 | 1.00th=[ 37], 5.00th=[ 39], 10.00th=[ 40], 20.00th=[ 44], 00:12:35.990 | 30.00th=[ 47], 40.00th=[ 52], 50.00th=[ 57], 60.00th=[ 62], 00:12:35.990 | 70.00th=[ 67], 80.00th=[ 74], 90.00th=[ 89], 95.00th=[ 105], 00:12:35.990 | 99.00th=[ 197], 99.50th=[ 211], 99.90th=[ 292], 99.95th=[ 292], 00:12:35.990 | 99.99th=[ 292] 00:12:35.990 bw ( KiB/s): min= 4096, max=23342, per=1.11%, avg=13796.10, stdev=6078.94, samples=20 00:12:35.990 iops : min= 32, max= 182, avg=107.65, stdev=47.46, samples=20 00:12:35.990 lat (msec) : 4=2.10%, 10=28.46%, 20=10.27%, 50=24.89%, 100=30.71% 00:12:35.990 lat (msec) : 250=3.33%, 500=0.24% 00:12:35.990 cpu : usr=0.67%, sys=0.47%, ctx=3409, majf=0, minf=3 00:12:35.990 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.990 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.990 issued rwts: total=960,1085,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.990 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.990 job29: (groupid=0, jobs=1): err= 0: pid=81796: Tue Jul 23 05:03:36 2024 00:12:35.990 read: IOPS=109, BW=13.7MiB/s (14.4MB/s)(120MiB/8739msec) 00:12:35.990 slat (usec): min=6, max=2880, avg=70.90, stdev=166.95 00:12:35.990 clat (usec): min=3263, max=89100, avg=13423.80, stdev=13005.38 00:12:35.990 lat (usec): min=3342, max=89113, avg=13494.70, stdev=13003.40 00:12:35.990 clat percentiles (usec): 00:12:35.990 | 1.00th=[ 3687], 5.00th=[ 4621], 10.00th=[ 5538], 20.00th=[ 6783], 00:12:35.990 | 30.00th=[ 7635], 40.00th=[ 8586], 50.00th=[ 9896], 60.00th=[11207], 00:12:35.990 | 70.00th=[12649], 80.00th=[15664], 90.00th=[22152], 95.00th=[32900], 00:12:35.990 | 99.00th=[77071], 
99.50th=[78119], 99.90th=[88605], 99.95th=[88605], 00:12:35.990 | 99.99th=[88605] 00:12:35.990 write: IOPS=128, BW=16.1MiB/s (16.9MB/s)(135MiB/8391msec); 0 zone resets 00:12:35.990 slat (usec): min=30, max=18426, avg=158.21, stdev=617.87 00:12:35.990 clat (msec): min=28, max=172, avg=61.43, stdev=18.48 00:12:35.990 lat (msec): min=28, max=172, avg=61.58, stdev=18.46 00:12:35.990 clat percentiles (msec): 00:12:35.990 | 1.00th=[ 39], 5.00th=[ 41], 10.00th=[ 43], 20.00th=[ 47], 00:12:35.990 | 30.00th=[ 50], 40.00th=[ 53], 50.00th=[ 58], 60.00th=[ 63], 00:12:35.990 | 70.00th=[ 68], 80.00th=[ 75], 90.00th=[ 88], 95.00th=[ 96], 00:12:35.990 | 99.00th=[ 124], 99.50th=[ 133], 99.90th=[ 157], 99.95th=[ 174], 00:12:35.990 | 99.99th=[ 174] 00:12:35.990 bw ( KiB/s): min= 3840, max=20777, per=1.11%, avg=13745.35, stdev=5584.48, samples=20 00:12:35.990 iops : min= 30, max= 162, avg=107.25, stdev=43.63, samples=20 00:12:35.990 lat (msec) : 4=0.73%, 10=23.47%, 20=17.39%, 50=21.07%, 100=35.28% 00:12:35.990 lat (msec) : 250=2.06% 00:12:35.990 cpu : usr=0.70%, sys=0.41%, ctx=3463, majf=0, minf=5 00:12:35.990 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.990 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.990 issued rwts: total=960,1081,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.990 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.990 job30: (groupid=0, jobs=1): err= 0: pid=81797: Tue Jul 23 05:03:36 2024 00:12:35.990 read: IOPS=74, BW=9553KiB/s (9783kB/s)(80.0MiB/8575msec) 00:12:35.990 slat (usec): min=7, max=1622, avg=76.38, stdev=155.11 00:12:35.990 clat (usec): min=5723, max=71968, avg=19092.81, stdev=9565.29 00:12:35.990 lat (usec): min=5827, max=71979, avg=19169.20, stdev=9555.55 00:12:35.990 clat percentiles (usec): 00:12:35.990 | 1.00th=[ 6456], 5.00th=[ 8291], 10.00th=[ 9634], 20.00th=[11994], 00:12:35.990 
| 30.00th=[14222], 40.00th=[16057], 50.00th=[16909], 60.00th=[18744], 00:12:35.990 | 70.00th=[20579], 80.00th=[24249], 90.00th=[29754], 95.00th=[35914], 00:12:35.990 | 99.00th=[56361], 99.50th=[64750], 99.90th=[71828], 99.95th=[71828], 00:12:35.990 | 99.99th=[71828] 00:12:35.990 write: IOPS=88, BW=11.0MiB/s (11.6MB/s)(93.9MiB/8497msec); 0 zone resets 00:12:35.990 slat (usec): min=37, max=5231, avg=140.65, stdev=282.08 00:12:35.990 clat (msec): min=45, max=374, avg=89.73, stdev=44.96 00:12:35.990 lat (msec): min=45, max=374, avg=89.87, stdev=44.96 00:12:35.990 clat percentiles (msec): 00:12:35.990 | 1.00th=[ 52], 5.00th=[ 58], 10.00th=[ 60], 20.00th=[ 63], 00:12:35.990 | 30.00th=[ 67], 40.00th=[ 70], 50.00th=[ 75], 60.00th=[ 81], 00:12:35.990 | 70.00th=[ 88], 80.00th=[ 106], 90.00th=[ 138], 95.00th=[ 180], 00:12:35.990 | 99.00th=[ 296], 99.50th=[ 342], 99.90th=[ 376], 99.95th=[ 376], 00:12:35.990 | 99.99th=[ 376] 00:12:35.990 bw ( KiB/s): min= 512, max=15616, per=0.80%, avg=9900.83, stdev=4582.81, samples=18 00:12:35.990 iops : min= 4, max= 122, avg=77.17, stdev=35.89, samples=18 00:12:35.990 lat (msec) : 10=5.25%, 20=26.10%, 50=14.02%, 100=43.06%, 250=10.57% 00:12:35.990 lat (msec) : 500=1.01% 00:12:35.990 cpu : usr=0.42%, sys=0.37%, ctx=2334, majf=0, minf=1 00:12:35.990 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.990 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.990 issued rwts: total=640,751,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.990 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.990 job31: (groupid=0, jobs=1): err= 0: pid=81798: Tue Jul 23 05:03:36 2024 00:12:35.990 read: IOPS=72, BW=9308KiB/s (9531kB/s)(80.0MiB/8801msec) 00:12:35.990 slat (usec): min=7, max=991, avg=53.64, stdev=96.28 00:12:35.990 clat (usec): min=8215, max=75467, avg=17156.43, stdev=9634.88 00:12:35.990 lat 
(usec): min=8443, max=75479, avg=17210.07, stdev=9633.27 00:12:35.990 clat percentiles (usec): 00:12:35.990 | 1.00th=[ 8848], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[10290], 00:12:35.990 | 30.00th=[11076], 40.00th=[12125], 50.00th=[13698], 60.00th=[16450], 00:12:35.990 | 70.00th=[18482], 80.00th=[21103], 90.00th=[29754], 95.00th=[35914], 00:12:35.990 | 99.00th=[55313], 99.50th=[65274], 99.90th=[74974], 99.95th=[74974], 00:12:35.990 | 99.99th=[74974] 00:12:35.990 write: IOPS=90, BW=11.3MiB/s (11.8MB/s)(97.5MiB/8658msec); 0 zone resets 00:12:35.990 slat (usec): min=36, max=2695, avg=151.98, stdev=240.06 00:12:35.990 clat (msec): min=18, max=323, avg=88.00, stdev=41.84 00:12:35.990 lat (msec): min=18, max=324, avg=88.16, stdev=41.85 00:12:35.990 clat percentiles (msec): 00:12:35.990 | 1.00th=[ 25], 5.00th=[ 58], 10.00th=[ 59], 20.00th=[ 63], 00:12:35.990 | 30.00th=[ 67], 40.00th=[ 70], 50.00th=[ 74], 60.00th=[ 79], 00:12:35.990 | 70.00th=[ 87], 80.00th=[ 101], 90.00th=[ 133], 95.00th=[ 184], 00:12:35.990 | 99.00th=[ 259], 99.50th=[ 266], 99.90th=[ 326], 99.95th=[ 326], 00:12:35.990 | 99.99th=[ 326] 00:12:35.990 bw ( KiB/s): min= 1024, max=17152, per=0.80%, avg=9894.40, stdev=4885.78, samples=20 00:12:35.990 iops : min= 8, max= 134, avg=77.30, stdev=38.17, samples=20 00:12:35.990 lat (msec) : 10=6.55%, 20=28.31%, 50=10.14%, 100=43.94%, 250=10.21% 00:12:35.990 lat (msec) : 500=0.85% 00:12:35.990 cpu : usr=0.50%, sys=0.31%, ctx=2460, majf=0, minf=1 00:12:35.990 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.990 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.990 issued rwts: total=640,780,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.990 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.991 job32: (groupid=0, jobs=1): err= 0: pid=81799: Tue Jul 23 05:03:36 2024 00:12:35.991 read: IOPS=81, BW=10.2MiB/s 
(10.7MB/s)(80.0MiB/7856msec) 00:12:35.991 slat (usec): min=5, max=1886, avg=60.89, stdev=125.67 00:12:35.991 clat (msec): min=2, max=381, avg=20.54, stdev=44.69 00:12:35.991 lat (msec): min=3, max=381, avg=20.60, stdev=44.69 00:12:35.991 clat percentiles (msec): 00:12:35.991 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 8], 00:12:35.991 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 13], 00:12:35.991 | 70.00th=[ 14], 80.00th=[ 18], 90.00th=[ 27], 95.00th=[ 59], 00:12:35.991 | 99.00th=[ 372], 99.50th=[ 376], 99.90th=[ 380], 99.95th=[ 380], 00:12:35.991 | 99.99th=[ 380] 00:12:35.991 write: IOPS=77, BW=9888KiB/s (10.1MB/s)(81.0MiB/8388msec); 0 zone resets 00:12:35.991 slat (usec): min=31, max=7465, avg=163.69, stdev=367.95 00:12:35.991 clat (msec): min=55, max=291, avg=102.81, stdev=37.84 00:12:35.991 lat (msec): min=56, max=292, avg=102.97, stdev=37.83 00:12:35.991 clat percentiles (msec): 00:12:35.991 | 1.00th=[ 58], 5.00th=[ 61], 10.00th=[ 63], 20.00th=[ 70], 00:12:35.991 | 30.00th=[ 75], 40.00th=[ 82], 50.00th=[ 91], 60.00th=[ 108], 00:12:35.991 | 70.00th=[ 123], 80.00th=[ 138], 90.00th=[ 157], 95.00th=[ 169], 00:12:35.991 | 99.00th=[ 209], 99.50th=[ 243], 99.90th=[ 292], 99.95th=[ 292], 00:12:35.991 | 99.99th=[ 292] 00:12:35.991 bw ( KiB/s): min= 768, max=14621, per=0.67%, avg=8310.26, stdev=3759.30, samples=19 00:12:35.991 iops : min= 6, max= 114, avg=64.79, stdev=29.37, samples=19 00:12:35.991 lat (msec) : 4=0.23%, 10=22.83%, 20=19.25%, 50=4.27%, 100=30.20% 00:12:35.991 lat (msec) : 250=22.44%, 500=0.78% 00:12:35.991 cpu : usr=0.38%, sys=0.33%, ctx=2236, majf=0, minf=5 00:12:35.991 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.991 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.991 issued rwts: total=640,648,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.991 latency : target=0, window=0, 
percentile=100.00%, depth=8 00:12:35.991 job33: (groupid=0, jobs=1): err= 0: pid=81800: Tue Jul 23 05:03:36 2024 00:12:35.991 read: IOPS=68, BW=8730KiB/s (8939kB/s)(80.0MiB/9384msec) 00:12:35.991 slat (usec): min=8, max=5106, avg=81.54, stdev=306.39 00:12:35.991 clat (msec): min=3, max=134, avg=19.41, stdev=19.55 00:12:35.991 lat (msec): min=3, max=134, avg=19.49, stdev=19.55 00:12:35.991 clat percentiles (msec): 00:12:35.991 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 11], 00:12:35.991 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 16], 00:12:35.991 | 70.00th=[ 18], 80.00th=[ 22], 90.00th=[ 32], 95.00th=[ 50], 00:12:35.991 | 99.00th=[ 126], 99.50th=[ 127], 99.90th=[ 136], 99.95th=[ 136], 00:12:35.991 | 99.99th=[ 136] 00:12:35.991 write: IOPS=93, BW=11.7MiB/s (12.2MB/s)(99.0MiB/8485msec); 0 zone resets 00:12:35.991 slat (usec): min=33, max=3463, avg=148.35, stdev=251.44 00:12:35.991 clat (msec): min=2, max=273, avg=85.07, stdev=43.20 00:12:35.991 lat (msec): min=2, max=273, avg=85.22, stdev=43.22 00:12:35.991 clat percentiles (msec): 00:12:35.991 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 20], 20.00th=[ 63], 00:12:35.991 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 77], 60.00th=[ 86], 00:12:35.991 | 70.00th=[ 100], 80.00th=[ 125], 90.00th=[ 142], 95.00th=[ 163], 00:12:35.991 | 99.00th=[ 194], 99.50th=[ 209], 99.90th=[ 275], 99.95th=[ 275], 00:12:35.991 | 99.99th=[ 275] 00:12:35.991 bw ( KiB/s): min= 2810, max=35584, per=0.81%, avg=10041.15, stdev=7090.00, samples=20 00:12:35.991 iops : min= 21, max= 278, avg=78.15, stdev=55.47, samples=20 00:12:35.991 lat (msec) : 4=1.19%, 10=12.15%, 20=26.61%, 50=9.08%, 100=33.80% 00:12:35.991 lat (msec) : 250=17.04%, 500=0.14% 00:12:35.991 cpu : usr=0.53%, sys=0.31%, ctx=2472, majf=0, minf=7 00:12:35.991 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.991 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.991 issued rwts: total=640,792,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.991 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.991 job34: (groupid=0, jobs=1): err= 0: pid=81801: Tue Jul 23 05:03:36 2024 00:12:35.991 read: IOPS=72, BW=9334KiB/s (9558kB/s)(69.6MiB/7638msec) 00:12:35.991 slat (usec): min=7, max=1054, avg=72.75, stdev=142.11 00:12:35.991 clat (msec): min=3, max=146, avg=15.45, stdev=19.05 00:12:35.991 lat (msec): min=3, max=146, avg=15.52, stdev=19.05 00:12:35.991 clat percentiles (msec): 00:12:35.991 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 7], 00:12:35.991 | 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 10], 60.00th=[ 13], 00:12:35.991 | 70.00th=[ 14], 80.00th=[ 17], 90.00th=[ 26], 95.00th=[ 47], 00:12:35.991 | 99.00th=[ 114], 99.50th=[ 121], 99.90th=[ 146], 99.95th=[ 146], 00:12:35.991 | 99.99th=[ 146] 00:12:35.991 write: IOPS=71, BW=9189KiB/s (9410kB/s)(80.0MiB/8915msec); 0 zone resets 00:12:35.991 slat (usec): min=39, max=1907, avg=159.78, stdev=228.66 00:12:35.991 clat (msec): min=57, max=393, avg=110.81, stdev=50.32 00:12:35.991 lat (msec): min=57, max=393, avg=110.97, stdev=50.34 00:12:35.991 clat percentiles (msec): 00:12:35.991 | 1.00th=[ 59], 5.00th=[ 62], 10.00th=[ 65], 20.00th=[ 71], 00:12:35.991 | 30.00th=[ 78], 40.00th=[ 86], 50.00th=[ 97], 60.00th=[ 111], 00:12:35.991 | 70.00th=[ 128], 80.00th=[ 142], 90.00th=[ 169], 95.00th=[ 211], 00:12:35.991 | 99.00th=[ 300], 99.50th=[ 338], 99.90th=[ 393], 99.95th=[ 393], 00:12:35.991 | 99.99th=[ 393] 00:12:35.991 bw ( KiB/s): min= 1024, max=13824, per=0.65%, avg=8121.74, stdev=3788.24, samples=19 00:12:35.991 iops : min= 8, max= 108, avg=63.32, stdev=29.57, samples=19 00:12:35.991 lat (msec) : 4=0.08%, 10=24.64%, 20=15.46%, 50=4.09%, 100=28.65% 00:12:35.991 lat (msec) : 250=25.81%, 500=1.25% 00:12:35.991 cpu : usr=0.44%, sys=0.24%, ctx=2098, majf=0, minf=7 00:12:35.991 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:12:35.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.991 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.991 issued rwts: total=557,640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.991 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.991 job35: (groupid=0, jobs=1): err= 0: pid=81802: Tue Jul 23 05:03:36 2024 00:12:35.991 read: IOPS=71, BW=9204KiB/s (9425kB/s)(80.0MiB/8900msec) 00:12:35.991 slat (usec): min=5, max=858, avg=50.57, stdev=102.03 00:12:35.991 clat (msec): min=3, max=142, avg=16.04, stdev=14.45 00:12:35.991 lat (msec): min=3, max=142, avg=16.09, stdev=14.45 00:12:35.991 clat percentiles (msec): 00:12:35.991 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 9], 00:12:35.991 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 15], 00:12:35.991 | 70.00th=[ 16], 80.00th=[ 19], 90.00th=[ 26], 95.00th=[ 35], 00:12:35.991 | 99.00th=[ 88], 99.50th=[ 103], 99.90th=[ 142], 99.95th=[ 142], 00:12:35.991 | 99.99th=[ 142] 00:12:35.991 write: IOPS=91, BW=11.4MiB/s (12.0MB/s)(100MiB/8757msec); 0 zone resets 00:12:35.991 slat (usec): min=38, max=2522, avg=152.78, stdev=248.48 00:12:35.991 clat (msec): min=5, max=274, avg=86.68, stdev=37.88 00:12:35.991 lat (msec): min=5, max=274, avg=86.83, stdev=37.89 00:12:35.991 clat percentiles (msec): 00:12:35.991 | 1.00th=[ 10], 5.00th=[ 57], 10.00th=[ 59], 20.00th=[ 63], 00:12:35.991 | 30.00th=[ 67], 40.00th=[ 70], 50.00th=[ 75], 60.00th=[ 82], 00:12:35.991 | 70.00th=[ 92], 80.00th=[ 110], 90.00th=[ 144], 95.00th=[ 165], 00:12:35.991 | 99.00th=[ 192], 99.50th=[ 228], 99.90th=[ 275], 99.95th=[ 275], 00:12:35.991 | 99.99th=[ 275] 00:12:35.991 bw ( KiB/s): min= 2560, max=22272, per=0.82%, avg=10147.90, stdev=4888.67, samples=20 00:12:35.991 iops : min= 20, max= 174, avg=79.20, stdev=38.16, samples=20 00:12:35.991 lat (msec) : 4=0.07%, 10=14.31%, 20=25.21%, 50=5.56%, 100=40.83% 00:12:35.991 lat (msec) : 250=13.89%, 500=0.14% 00:12:35.991 
cpu : usr=0.55%, sys=0.27%, ctx=2406, majf=0, minf=3 00:12:35.991 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.991 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.991 issued rwts: total=640,800,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.991 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.991 job36: (groupid=0, jobs=1): err= 0: pid=81803: Tue Jul 23 05:03:36 2024 00:12:35.991 read: IOPS=73, BW=9469KiB/s (9697kB/s)(80.0MiB/8651msec) 00:12:35.991 slat (usec): min=6, max=1161, avg=72.13, stdev=135.07 00:12:35.991 clat (msec): min=6, max=140, avg=19.13, stdev=13.74 00:12:35.991 lat (msec): min=6, max=140, avg=19.20, stdev=13.73 00:12:35.991 clat percentiles (msec): 00:12:35.991 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 12], 00:12:35.992 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 17], 60.00th=[ 18], 00:12:35.992 | 70.00th=[ 21], 80.00th=[ 24], 90.00th=[ 32], 95.00th=[ 36], 00:12:35.992 | 99.00th=[ 99], 99.50th=[ 120], 99.90th=[ 140], 99.95th=[ 140], 00:12:35.992 | 99.99th=[ 140] 00:12:35.992 write: IOPS=89, BW=11.1MiB/s (11.7MB/s)(94.6MiB/8505msec); 0 zone resets 00:12:35.992 slat (usec): min=37, max=2055, avg=154.62, stdev=231.72 00:12:35.992 clat (msec): min=25, max=322, avg=88.99, stdev=41.06 00:12:35.992 lat (msec): min=25, max=322, avg=89.14, stdev=41.05 00:12:35.992 clat percentiles (msec): 00:12:35.992 | 1.00th=[ 32], 5.00th=[ 58], 10.00th=[ 60], 20.00th=[ 64], 00:12:35.992 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 75], 60.00th=[ 81], 00:12:35.992 | 70.00th=[ 90], 80.00th=[ 106], 90.00th=[ 138], 95.00th=[ 178], 00:12:35.992 | 99.00th=[ 264], 99.50th=[ 284], 99.90th=[ 321], 99.95th=[ 321], 00:12:35.992 | 99.99th=[ 321] 00:12:35.992 bw ( KiB/s): min= 1792, max=16128, per=0.81%, avg=10102.89, stdev=4628.90, samples=19 00:12:35.992 iops : min= 14, max= 126, avg=78.79, stdev=36.11, 
samples=19 00:12:35.992 lat (msec) : 10=7.52%, 20=24.05%, 50=13.96%, 100=41.37%, 250=12.46% 00:12:35.992 lat (msec) : 500=0.64% 00:12:35.992 cpu : usr=0.50%, sys=0.29%, ctx=2426, majf=0, minf=3 00:12:35.992 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.992 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.992 issued rwts: total=640,757,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.992 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.992 job37: (groupid=0, jobs=1): err= 0: pid=81808: Tue Jul 23 05:03:36 2024 00:12:35.992 read: IOPS=74, BW=9584KiB/s (9814kB/s)(80.0MiB/8548msec) 00:12:35.992 slat (usec): min=5, max=1523, avg=76.91, stdev=154.76 00:12:35.992 clat (usec): min=7438, max=60018, avg=18304.49, stdev=7767.29 00:12:35.992 lat (usec): min=7459, max=60030, avg=18381.40, stdev=7761.32 00:12:35.992 clat percentiles (usec): 00:12:35.992 | 1.00th=[ 7963], 5.00th=[ 9241], 10.00th=[10159], 20.00th=[12518], 00:12:35.992 | 30.00th=[13566], 40.00th=[15270], 50.00th=[16909], 60.00th=[18482], 00:12:35.992 | 70.00th=[19530], 80.00th=[22152], 90.00th=[30278], 95.00th=[35914], 00:12:35.992 | 99.00th=[42206], 99.50th=[50070], 99.90th=[60031], 99.95th=[60031], 00:12:35.992 | 99.99th=[60031] 00:12:35.992 write: IOPS=89, BW=11.2MiB/s (11.7MB/s)(95.5MiB/8560msec); 0 zone resets 00:12:35.992 slat (usec): min=39, max=3218, avg=152.30, stdev=237.47 00:12:35.992 clat (msec): min=52, max=328, avg=88.74, stdev=41.80 00:12:35.992 lat (msec): min=52, max=328, avg=88.89, stdev=41.80 00:12:35.992 clat percentiles (msec): 00:12:35.992 | 1.00th=[ 57], 5.00th=[ 58], 10.00th=[ 60], 20.00th=[ 64], 00:12:35.992 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 74], 60.00th=[ 80], 00:12:35.992 | 70.00th=[ 88], 80.00th=[ 102], 90.00th=[ 134], 95.00th=[ 176], 00:12:35.992 | 99.00th=[ 271], 99.50th=[ 300], 99.90th=[ 330], 99.95th=[ 330], 
00:12:35.992 | 99.99th=[ 330] 00:12:35.992 bw ( KiB/s): min= 1788, max=15329, per=0.81%, avg=10043.56, stdev=4505.78, samples=18 00:12:35.992 iops : min= 13, max= 119, avg=78.28, stdev=35.28, samples=18 00:12:35.992 lat (msec) : 10=4.34%, 20=28.99%, 50=12.04%, 100=43.52%, 250=10.26% 00:12:35.992 lat (msec) : 500=0.85% 00:12:35.992 cpu : usr=0.55%, sys=0.25%, ctx=2403, majf=0, minf=1 00:12:35.992 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.992 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.992 issued rwts: total=640,764,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.992 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.992 job38: (groupid=0, jobs=1): err= 0: pid=81809: Tue Jul 23 05:03:36 2024 00:12:35.992 read: IOPS=74, BW=9532KiB/s (9761kB/s)(80.0MiB/8594msec) 00:12:35.992 slat (usec): min=7, max=1384, avg=70.74, stdev=143.37 00:12:35.992 clat (usec): min=8080, max=64137, avg=16469.42, stdev=7737.68 00:12:35.992 lat (usec): min=8133, max=65522, avg=16540.15, stdev=7754.68 00:12:35.992 clat percentiles (usec): 00:12:35.992 | 1.00th=[ 8291], 5.00th=[ 9110], 10.00th=[ 9765], 20.00th=[10290], 00:12:35.992 | 30.00th=[11207], 40.00th=[12518], 50.00th=[14615], 60.00th=[17171], 00:12:35.992 | 70.00th=[18744], 80.00th=[21103], 90.00th=[23987], 95.00th=[28705], 00:12:35.992 | 99.00th=[47449], 99.50th=[63701], 99.90th=[64226], 99.95th=[64226], 00:12:35.992 | 99.99th=[64226] 00:12:35.992 write: IOPS=91, BW=11.4MiB/s (11.9MB/s)(99.2MiB/8710msec); 0 zone resets 00:12:35.992 slat (usec): min=40, max=5915, avg=155.38, stdev=314.22 00:12:35.992 clat (msec): min=39, max=329, avg=86.91, stdev=39.87 00:12:35.992 lat (msec): min=39, max=329, avg=87.06, stdev=39.87 00:12:35.992 clat percentiles (msec): 00:12:35.992 | 1.00th=[ 46], 5.00th=[ 59], 10.00th=[ 60], 20.00th=[ 63], 00:12:35.992 | 30.00th=[ 66], 40.00th=[ 69], 
50.00th=[ 73], 60.00th=[ 79], 00:12:35.992 | 70.00th=[ 87], 80.00th=[ 99], 90.00th=[ 127], 95.00th=[ 176], 00:12:35.992 | 99.00th=[ 247], 99.50th=[ 251], 99.90th=[ 330], 99.95th=[ 330], 00:12:35.992 | 99.99th=[ 330] 00:12:35.992 bw ( KiB/s): min= 2048, max=15872, per=0.85%, avg=10602.53, stdev=4507.95, samples=19 00:12:35.992 iops : min= 16, max= 124, avg=82.68, stdev=35.38, samples=19 00:12:35.992 lat (msec) : 10=6.28%, 20=26.99%, 50=11.58%, 100=44.63%, 250=10.25% 00:12:35.992 lat (msec) : 500=0.28% 00:12:35.992 cpu : usr=0.56%, sys=0.28%, ctx=2414, majf=0, minf=3 00:12:35.992 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.992 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.992 issued rwts: total=640,794,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.992 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.992 job39: (groupid=0, jobs=1): err= 0: pid=81810: Tue Jul 23 05:03:36 2024 00:12:35.992 read: IOPS=73, BW=9400KiB/s (9625kB/s)(80.0MiB/8715msec) 00:12:35.992 slat (usec): min=6, max=2108, avg=69.59, stdev=148.25 00:12:35.992 clat (msec): min=4, max=137, avg=20.47, stdev=15.06 00:12:35.992 lat (msec): min=4, max=137, avg=20.54, stdev=15.06 00:12:35.992 clat percentiles (msec): 00:12:35.992 | 1.00th=[ 7], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 12], 00:12:35.992 | 30.00th=[ 14], 40.00th=[ 16], 50.00th=[ 18], 60.00th=[ 19], 00:12:35.992 | 70.00th=[ 22], 80.00th=[ 26], 90.00th=[ 33], 95.00th=[ 41], 00:12:35.992 | 99.00th=[ 113], 99.50th=[ 128], 99.90th=[ 138], 99.95th=[ 138], 00:12:35.992 | 99.99th=[ 138] 00:12:35.992 write: IOPS=80, BW=10.1MiB/s (10.6MB/s)(84.4MiB/8381msec); 0 zone resets 00:12:35.992 slat (usec): min=37, max=5127, avg=164.58, stdev=323.58 00:12:35.992 clat (msec): min=32, max=387, avg=98.48, stdev=51.67 00:12:35.992 lat (msec): min=32, max=387, avg=98.64, stdev=51.66 00:12:35.992 clat 
percentiles (msec): 00:12:35.992 | 1.00th=[ 38], 5.00th=[ 58], 10.00th=[ 59], 20.00th=[ 63], 00:12:35.992 | 30.00th=[ 68], 40.00th=[ 73], 50.00th=[ 80], 60.00th=[ 91], 00:12:35.992 | 70.00th=[ 108], 80.00th=[ 131], 90.00th=[ 150], 95.00th=[ 192], 00:12:35.992 | 99.00th=[ 342], 99.50th=[ 368], 99.90th=[ 388], 99.95th=[ 388], 00:12:35.992 | 99.99th=[ 388] 00:12:35.992 bw ( KiB/s): min= 768, max=16384, per=0.69%, avg=8547.80, stdev=4860.78, samples=20 00:12:35.993 iops : min= 6, max= 128, avg=66.65, stdev=37.98, samples=20 00:12:35.993 lat (msec) : 10=4.64%, 20=27.91%, 50=15.36%, 100=34.22%, 250=16.65% 00:12:35.993 lat (msec) : 500=1.22% 00:12:35.993 cpu : usr=0.43%, sys=0.32%, ctx=2250, majf=0, minf=3 00:12:35.993 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.993 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.993 issued rwts: total=640,675,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.993 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.993 job40: (groupid=0, jobs=1): err= 0: pid=81812: Tue Jul 23 05:03:36 2024 00:12:35.993 read: IOPS=73, BW=9442KiB/s (9669kB/s)(80.0MiB/8676msec) 00:12:35.993 slat (usec): min=6, max=1573, avg=68.44, stdev=146.77 00:12:35.993 clat (msec): min=6, max=105, avg=14.95, stdev=11.03 00:12:35.993 lat (msec): min=6, max=105, avg=15.02, stdev=11.03 00:12:35.993 clat percentiles (msec): 00:12:35.993 | 1.00th=[ 8], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 10], 00:12:35.993 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 13], 00:12:35.993 | 70.00th=[ 15], 80.00th=[ 18], 90.00th=[ 23], 95.00th=[ 32], 00:12:35.993 | 99.00th=[ 71], 99.50th=[ 97], 99.90th=[ 106], 99.95th=[ 106], 00:12:35.993 | 99.99th=[ 106] 00:12:35.993 write: IOPS=89, BW=11.2MiB/s (11.7MB/s)(98.6MiB/8822msec); 0 zone resets 00:12:35.993 slat (usec): min=36, max=3052, avg=151.32, stdev=259.39 00:12:35.993 clat 
(msec): min=43, max=381, avg=88.59, stdev=43.34 00:12:35.993 lat (msec): min=43, max=381, avg=88.75, stdev=43.35 00:12:35.993 clat percentiles (msec): 00:12:35.993 | 1.00th=[ 49], 5.00th=[ 55], 10.00th=[ 56], 20.00th=[ 60], 00:12:35.993 | 30.00th=[ 66], 40.00th=[ 70], 50.00th=[ 75], 60.00th=[ 82], 00:12:35.993 | 70.00th=[ 92], 80.00th=[ 108], 90.00th=[ 142], 95.00th=[ 167], 00:12:35.993 | 99.00th=[ 284], 99.50th=[ 334], 99.90th=[ 380], 99.95th=[ 380], 00:12:35.993 | 99.99th=[ 380] 00:12:35.993 bw ( KiB/s): min= 1280, max=17920, per=0.80%, avg=9994.60, stdev=4800.31, samples=20 00:12:35.993 iops : min= 10, max= 140, avg=77.95, stdev=37.59, samples=20 00:12:35.993 lat (msec) : 10=11.27%, 20=26.94%, 50=6.93%, 100=41.57%, 250=12.53% 00:12:35.993 lat (msec) : 500=0.77% 00:12:35.993 cpu : usr=0.50%, sys=0.32%, ctx=2352, majf=0, minf=1 00:12:35.993 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.993 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.993 issued rwts: total=640,789,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.993 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.993 job41: (groupid=0, jobs=1): err= 0: pid=81813: Tue Jul 23 05:03:36 2024 00:12:35.993 read: IOPS=73, BW=9390KiB/s (9616kB/s)(81.0MiB/8833msec) 00:12:35.993 slat (usec): min=6, max=1575, avg=72.29, stdev=145.35 00:12:35.993 clat (msec): min=6, max=151, avg=21.41, stdev=16.54 00:12:35.993 lat (msec): min=6, max=151, avg=21.48, stdev=16.55 00:12:35.993 clat percentiles (msec): 00:12:35.993 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 12], 00:12:35.993 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 18], 60.00th=[ 21], 00:12:35.993 | 70.00th=[ 24], 80.00th=[ 27], 90.00th=[ 35], 95.00th=[ 45], 00:12:35.993 | 99.00th=[ 115], 99.50th=[ 136], 99.90th=[ 153], 99.95th=[ 153], 00:12:35.993 | 99.99th=[ 153] 00:12:35.993 write: IOPS=96, 
BW=12.1MiB/s (12.7MB/s)(100MiB/8269msec); 0 zone resets 00:12:35.993 slat (usec): min=37, max=3332, avg=151.10, stdev=266.96 00:12:35.993 clat (msec): min=2, max=272, avg=81.85, stdev=38.00 00:12:35.993 lat (msec): min=2, max=272, avg=82.00, stdev=38.00 00:12:35.993 clat percentiles (msec): 00:12:35.993 | 1.00th=[ 12], 5.00th=[ 54], 10.00th=[ 56], 20.00th=[ 60], 00:12:35.993 | 30.00th=[ 64], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 77], 00:12:35.993 | 70.00th=[ 84], 80.00th=[ 94], 90.00th=[ 126], 95.00th=[ 167], 00:12:35.993 | 99.00th=[ 239], 99.50th=[ 264], 99.90th=[ 271], 99.95th=[ 271], 00:12:35.993 | 99.99th=[ 271] 00:12:35.993 bw ( KiB/s): min= 1277, max=20521, per=0.82%, avg=10238.65, stdev=5480.78, samples=20 00:12:35.993 iops : min= 9, max= 160, avg=79.80, stdev=42.88, samples=20 00:12:35.993 lat (msec) : 4=0.28%, 10=6.49%, 20=20.93%, 50=17.13%, 100=45.86% 00:12:35.993 lat (msec) : 250=8.84%, 500=0.48% 00:12:35.993 cpu : usr=0.55%, sys=0.29%, ctx=2454, majf=0, minf=5 00:12:35.993 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.993 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.993 issued rwts: total=648,800,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.993 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.993 job42: (groupid=0, jobs=1): err= 0: pid=81814: Tue Jul 23 05:03:36 2024 00:12:35.993 read: IOPS=76, BW=9762KiB/s (9996kB/s)(80.0MiB/8392msec) 00:12:35.993 slat (usec): min=7, max=2727, avg=72.09, stdev=184.47 00:12:35.993 clat (usec): min=6194, max=51923, avg=14471.62, stdev=6597.57 00:12:35.993 lat (usec): min=6219, max=51977, avg=14543.71, stdev=6584.04 00:12:35.993 clat percentiles (usec): 00:12:35.993 | 1.00th=[ 6652], 5.00th=[ 7308], 10.00th=[ 7570], 20.00th=[ 8848], 00:12:35.993 | 30.00th=[11338], 40.00th=[11994], 50.00th=[13042], 60.00th=[14484], 00:12:35.993 | 70.00th=[15401], 
80.00th=[17433], 90.00th=[22414], 95.00th=[26346], 00:12:35.993 | 99.00th=[40633], 99.50th=[44827], 99.90th=[52167], 99.95th=[52167], 00:12:35.993 | 99.99th=[52167] 00:12:35.993 write: IOPS=86, BW=10.8MiB/s (11.3MB/s)(95.8MiB/8886msec); 0 zone resets 00:12:35.993 slat (usec): min=38, max=2435, avg=160.49, stdev=220.11 00:12:35.993 clat (msec): min=24, max=355, avg=91.84, stdev=41.24 00:12:35.993 lat (msec): min=24, max=355, avg=92.00, stdev=41.26 00:12:35.993 clat percentiles (msec): 00:12:35.993 | 1.00th=[ 30], 5.00th=[ 56], 10.00th=[ 59], 20.00th=[ 63], 00:12:35.993 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 79], 60.00th=[ 89], 00:12:35.993 | 70.00th=[ 101], 80.00th=[ 116], 90.00th=[ 144], 95.00th=[ 171], 00:12:35.993 | 99.00th=[ 236], 99.50th=[ 330], 99.90th=[ 355], 99.95th=[ 355], 00:12:35.993 | 99.99th=[ 355] 00:12:35.993 bw ( KiB/s): min= 2304, max=16896, per=0.78%, avg=9697.00, stdev=4497.13, samples=20 00:12:35.993 iops : min= 18, max= 132, avg=75.60, stdev=35.16, samples=20 00:12:35.993 lat (msec) : 10=10.46%, 20=27.45%, 50=8.68%, 100=37.13%, 250=15.79% 00:12:35.993 lat (msec) : 500=0.50% 00:12:35.993 cpu : usr=0.51%, sys=0.31%, ctx=2434, majf=0, minf=3 00:12:35.993 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.993 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.993 issued rwts: total=640,766,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.993 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.993 job43: (groupid=0, jobs=1): err= 0: pid=81815: Tue Jul 23 05:03:36 2024 00:12:35.993 read: IOPS=83, BW=10.5MiB/s (11.0MB/s)(92.9MiB/8884msec) 00:12:35.993 slat (usec): min=6, max=1323, avg=85.19, stdev=151.36 00:12:35.993 clat (usec): min=4645, max=70585, avg=16641.17, stdev=10717.60 00:12:35.993 lat (usec): min=4661, max=70600, avg=16726.36, stdev=10724.21 00:12:35.993 clat percentiles (usec): 
00:12:35.993 | 1.00th=[ 5800], 5.00th=[ 6849], 10.00th=[ 7439], 20.00th=[ 8717], 00:12:35.993 | 30.00th=[10028], 40.00th=[11600], 50.00th=[13304], 60.00th=[16057], 00:12:35.993 | 70.00th=[17957], 80.00th=[21890], 90.00th=[28705], 95.00th=[40109], 00:12:35.994 | 99.00th=[56886], 99.50th=[68682], 99.90th=[70779], 99.95th=[70779], 00:12:35.994 | 99.99th=[70779] 00:12:35.994 write: IOPS=94, BW=11.8MiB/s (12.4MB/s)(100MiB/8439msec); 0 zone resets 00:12:35.994 slat (usec): min=38, max=4173, avg=165.54, stdev=305.24 00:12:35.994 clat (msec): min=6, max=346, avg=83.59, stdev=41.53 00:12:35.994 lat (msec): min=7, max=346, avg=83.76, stdev=41.54 00:12:35.994 clat percentiles (msec): 00:12:35.994 | 1.00th=[ 12], 5.00th=[ 54], 10.00th=[ 56], 20.00th=[ 59], 00:12:35.994 | 30.00th=[ 64], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 79], 00:12:35.994 | 70.00th=[ 87], 80.00th=[ 96], 90.00th=[ 128], 95.00th=[ 171], 00:12:35.994 | 99.00th=[ 247], 99.50th=[ 305], 99.90th=[ 347], 99.95th=[ 347], 00:12:35.994 | 99.99th=[ 347] 00:12:35.994 bw ( KiB/s): min= 1277, max=21504, per=0.84%, avg=10492.11, stdev=5584.58, samples=19 00:12:35.994 iops : min= 9, max= 168, avg=81.74, stdev=43.78, samples=19 00:12:35.994 lat (msec) : 10=14.39%, 20=24.11%, 50=10.69%, 100=41.41%, 250=8.94% 00:12:35.994 lat (msec) : 500=0.45% 00:12:35.994 cpu : usr=0.54%, sys=0.31%, ctx=2699, majf=0, minf=1 00:12:35.994 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.994 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.994 issued rwts: total=743,800,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.994 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.994 job44: (groupid=0, jobs=1): err= 0: pid=81822: Tue Jul 23 05:03:36 2024 00:12:35.994 read: IOPS=80, BW=10.1MiB/s (10.6MB/s)(80.0MiB/7931msec) 00:12:35.994 slat (usec): min=7, max=2093, avg=87.29, stdev=183.35 
00:12:35.994 clat (msec): min=3, max=177, avg=19.56, stdev=29.85 00:12:35.994 lat (msec): min=3, max=177, avg=19.65, stdev=29.85 00:12:35.994 clat percentiles (msec): 00:12:35.994 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 8], 20.00th=[ 9], 00:12:35.994 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 13], 00:12:35.994 | 70.00th=[ 15], 80.00th=[ 17], 90.00th=[ 24], 95.00th=[ 81], 00:12:35.994 | 99.00th=[ 171], 99.50th=[ 174], 99.90th=[ 178], 99.95th=[ 178], 00:12:35.994 | 99.99th=[ 178] 00:12:35.994 write: IOPS=79, BW=9.89MiB/s (10.4MB/s)(83.5MiB/8443msec); 0 zone resets 00:12:35.994 slat (usec): min=38, max=2911, avg=161.85, stdev=255.88 00:12:35.994 clat (msec): min=33, max=358, avg=100.49, stdev=47.28 00:12:35.994 lat (msec): min=33, max=358, avg=100.65, stdev=47.29 00:12:35.994 clat percentiles (msec): 00:12:35.994 | 1.00th=[ 52], 5.00th=[ 58], 10.00th=[ 59], 20.00th=[ 65], 00:12:35.994 | 30.00th=[ 70], 40.00th=[ 77], 50.00th=[ 84], 60.00th=[ 94], 00:12:35.994 | 70.00th=[ 113], 80.00th=[ 130], 90.00th=[ 167], 95.00th=[ 197], 00:12:35.994 | 99.00th=[ 271], 99.50th=[ 292], 99.90th=[ 359], 99.95th=[ 359], 00:12:35.994 | 99.99th=[ 359] 00:12:35.994 bw ( KiB/s): min= 2043, max=15104, per=0.69%, avg=8622.53, stdev=4028.97, samples=19 00:12:35.994 iops : min= 15, max= 118, avg=67.21, stdev=31.56, samples=19 00:12:35.994 lat (msec) : 4=0.38%, 10=17.35%, 20=24.31%, 50=4.13%, 100=33.87% 00:12:35.994 lat (msec) : 250=19.04%, 500=0.92% 00:12:35.994 cpu : usr=0.47%, sys=0.28%, ctx=2327, majf=0, minf=11 00:12:35.994 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.994 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.994 issued rwts: total=640,668,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.994 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.994 job45: (groupid=0, jobs=1): err= 0: pid=81823: Tue Jul 23 
05:03:36 2024 00:12:35.994 read: IOPS=78, BW=9.79MiB/s (10.3MB/s)(85.4MiB/8724msec) 00:12:35.994 slat (usec): min=8, max=3004, avg=85.25, stdev=206.71 00:12:35.994 clat (usec): min=6503, max=78850, avg=15971.86, stdev=9703.14 00:12:35.994 lat (usec): min=6545, max=78860, avg=16057.11, stdev=9694.90 00:12:35.994 clat percentiles (usec): 00:12:35.994 | 1.00th=[ 7308], 5.00th=[ 7832], 10.00th=[ 8291], 20.00th=[ 9241], 00:12:35.994 | 30.00th=[10159], 40.00th=[11600], 50.00th=[13042], 60.00th=[15008], 00:12:35.994 | 70.00th=[16909], 80.00th=[20055], 90.00th=[28181], 95.00th=[34866], 00:12:35.994 | 99.00th=[56886], 99.50th=[67634], 99.90th=[79168], 99.95th=[79168], 00:12:35.994 | 99.99th=[79168] 00:12:35.994 write: IOPS=93, BW=11.7MiB/s (12.3MB/s)(100MiB/8525msec); 0 zone resets 00:12:35.994 slat (usec): min=38, max=9959, avg=161.35, stdev=438.81 00:12:35.994 clat (msec): min=15, max=314, avg=84.12, stdev=40.15 00:12:35.994 lat (msec): min=16, max=314, avg=84.28, stdev=40.14 00:12:35.994 clat percentiles (msec): 00:12:35.994 | 1.00th=[ 24], 5.00th=[ 55], 10.00th=[ 56], 20.00th=[ 59], 00:12:35.994 | 30.00th=[ 63], 40.00th=[ 68], 50.00th=[ 73], 60.00th=[ 79], 00:12:35.994 | 70.00th=[ 87], 80.00th=[ 99], 90.00th=[ 127], 95.00th=[ 161], 00:12:35.994 | 99.00th=[ 249], 99.50th=[ 284], 99.90th=[ 317], 99.95th=[ 317], 00:12:35.994 | 99.99th=[ 317] 00:12:35.994 bw ( KiB/s): min= 2048, max=19200, per=0.84%, avg=10425.11, stdev=5064.89, samples=19 00:12:35.994 iops : min= 16, max= 150, avg=81.21, stdev=39.77, samples=19 00:12:35.994 lat (msec) : 10=13.42%, 20=23.67%, 50=9.64%, 100=43.09%, 250=9.64% 00:12:35.994 lat (msec) : 500=0.54% 00:12:35.994 cpu : usr=0.57%, sys=0.28%, ctx=2545, majf=0, minf=3 00:12:35.994 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.994 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.994 issued rwts: 
total=683,800,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.994 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.994 job46: (groupid=0, jobs=1): err= 0: pid=81824: Tue Jul 23 05:03:36 2024 00:12:35.994 read: IOPS=76, BW=9761KiB/s (9995kB/s)(80.0MiB/8393msec) 00:12:35.994 slat (usec): min=5, max=1486, avg=69.62, stdev=128.13 00:12:35.994 clat (usec): min=4334, max=44812, avg=11843.06, stdev=6214.15 00:12:35.994 lat (usec): min=4491, max=44994, avg=11912.67, stdev=6217.35 00:12:35.994 clat percentiles (usec): 00:12:35.994 | 1.00th=[ 4817], 5.00th=[ 5211], 10.00th=[ 5604], 20.00th=[ 6980], 00:12:35.994 | 30.00th=[ 7963], 40.00th=[ 8848], 50.00th=[10814], 60.00th=[12125], 00:12:35.994 | 70.00th=[13960], 80.00th=[15008], 90.00th=[18482], 95.00th=[23462], 00:12:35.994 | 99.00th=[35390], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:12:35.994 | 99.99th=[44827] 00:12:35.994 write: IOPS=85, BW=10.6MiB/s (11.2MB/s)(96.6MiB/9079msec); 0 zone resets 00:12:35.994 slat (usec): min=30, max=2732, avg=153.67, stdev=264.05 00:12:35.994 clat (msec): min=30, max=337, avg=93.08, stdev=40.80 00:12:35.994 lat (msec): min=30, max=337, avg=93.24, stdev=40.80 00:12:35.994 clat percentiles (msec): 00:12:35.994 | 1.00th=[ 37], 5.00th=[ 56], 10.00th=[ 58], 20.00th=[ 63], 00:12:35.994 | 30.00th=[ 69], 40.00th=[ 74], 50.00th=[ 81], 60.00th=[ 89], 00:12:35.994 | 70.00th=[ 101], 80.00th=[ 118], 90.00th=[ 146], 95.00th=[ 176], 00:12:35.994 | 99.00th=[ 241], 99.50th=[ 292], 99.90th=[ 338], 99.95th=[ 338], 00:12:35.994 | 99.99th=[ 338] 00:12:35.995 bw ( KiB/s): min= 1792, max=16160, per=0.79%, avg=9806.40, stdev=4095.32, samples=20 00:12:35.995 iops : min= 14, max= 126, avg=76.60, stdev=31.97, samples=20 00:12:35.995 lat (msec) : 10=20.45%, 20=21.37%, 50=4.10%, 100=37.65%, 250=15.92% 00:12:35.995 lat (msec) : 500=0.50% 00:12:35.995 cpu : usr=0.54%, sys=0.25%, ctx=2455, majf=0, minf=1 00:12:35.995 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:12:35.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.995 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.995 issued rwts: total=640,773,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.995 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.995 job47: (groupid=0, jobs=1): err= 0: pid=81825: Tue Jul 23 05:03:36 2024 00:12:35.995 read: IOPS=78, BW=9999KiB/s (10.2MB/s)(80.0MiB/8193msec) 00:12:35.995 slat (usec): min=6, max=1025, avg=59.24, stdev=112.00 00:12:35.995 clat (msec): min=4, max=275, avg=20.28, stdev=38.69 00:12:35.995 lat (msec): min=4, max=275, avg=20.34, stdev=38.70 00:12:35.995 clat percentiles (msec): 00:12:35.995 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 7], 00:12:35.995 | 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 10], 60.00th=[ 11], 00:12:35.995 | 70.00th=[ 14], 80.00th=[ 18], 90.00th=[ 29], 95.00th=[ 69], 00:12:35.995 | 99.00th=[ 232], 99.50th=[ 271], 99.90th=[ 275], 99.95th=[ 275], 00:12:35.995 | 99.99th=[ 275] 00:12:35.995 write: IOPS=79, BW=9.95MiB/s (10.4MB/s)(83.6MiB/8404msec); 0 zone resets 00:12:35.995 slat (usec): min=30, max=2554, avg=168.10, stdev=279.41 00:12:35.995 clat (msec): min=37, max=308, avg=99.71, stdev=45.33 00:12:35.995 lat (msec): min=38, max=309, avg=99.88, stdev=45.34 00:12:35.995 clat percentiles (msec): 00:12:35.995 | 1.00th=[ 43], 5.00th=[ 55], 10.00th=[ 57], 20.00th=[ 63], 00:12:35.995 | 30.00th=[ 69], 40.00th=[ 75], 50.00th=[ 85], 60.00th=[ 101], 00:12:35.995 | 70.00th=[ 114], 80.00th=[ 134], 90.00th=[ 161], 95.00th=[ 194], 00:12:35.995 | 99.00th=[ 255], 99.50th=[ 275], 99.90th=[ 309], 99.95th=[ 309], 00:12:35.995 | 99.99th=[ 309] 00:12:35.995 bw ( KiB/s): min= 2304, max=16128, per=0.72%, avg=8915.11, stdev=4045.33, samples=19 00:12:35.995 iops : min= 18, max= 126, avg=69.47, stdev=31.54, samples=19 00:12:35.995 lat (msec) : 10=25.44%, 20=14.82%, 50=6.19%, 100=31.32%, 250=21.16% 00:12:35.995 lat (msec) : 500=1.07% 00:12:35.995 cpu : 
usr=0.44%, sys=0.30%, ctx=2210, majf=0, minf=3 00:12:35.995 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.995 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.995 issued rwts: total=640,669,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.995 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.995 job48: (groupid=0, jobs=1): err= 0: pid=81826: Tue Jul 23 05:03:36 2024 00:12:35.995 read: IOPS=76, BW=9824KiB/s (10.1MB/s)(80.0MiB/8339msec) 00:12:35.995 slat (usec): min=6, max=2967, avg=70.61, stdev=176.00 00:12:35.995 clat (usec): min=6748, max=69388, avg=13248.82, stdev=7699.46 00:12:35.995 lat (usec): min=6788, max=69395, avg=13319.43, stdev=7686.77 00:12:35.995 clat percentiles (usec): 00:12:35.995 | 1.00th=[ 6915], 5.00th=[ 7373], 10.00th=[ 7635], 20.00th=[ 8291], 00:12:35.995 | 30.00th=[ 9372], 40.00th=[10552], 50.00th=[11338], 60.00th=[12649], 00:12:35.995 | 70.00th=[13960], 80.00th=[15533], 90.00th=[19268], 95.00th=[24511], 00:12:35.995 | 99.00th=[46924], 99.50th=[67634], 99.90th=[69731], 99.95th=[69731], 00:12:35.995 | 99.99th=[69731] 00:12:35.995 write: IOPS=87, BW=10.9MiB/s (11.4MB/s)(97.5MiB/8958msec); 0 zone resets 00:12:35.995 slat (usec): min=30, max=4259, avg=164.22, stdev=316.46 00:12:35.995 clat (msec): min=39, max=294, avg=91.03, stdev=41.56 00:12:35.995 lat (msec): min=39, max=294, avg=91.19, stdev=41.57 00:12:35.995 clat percentiles (msec): 00:12:35.995 | 1.00th=[ 53], 5.00th=[ 56], 10.00th=[ 58], 20.00th=[ 61], 00:12:35.995 | 30.00th=[ 65], 40.00th=[ 70], 50.00th=[ 74], 60.00th=[ 82], 00:12:35.995 | 70.00th=[ 96], 80.00th=[ 122], 90.00th=[ 146], 95.00th=[ 180], 00:12:35.995 | 99.00th=[ 255], 99.50th=[ 266], 99.90th=[ 296], 99.95th=[ 296], 00:12:35.995 | 99.99th=[ 296] 00:12:35.995 bw ( KiB/s): min= 2304, max=15339, per=0.80%, avg=9955.05, stdev=4604.24, samples=19 00:12:35.995 iops : 
min= 18, max= 119, avg=77.58, stdev=35.99, samples=19 00:12:35.995 lat (msec) : 10=15.07%, 20=25.92%, 50=3.87%, 100=39.23%, 250=15.28% 00:12:35.995 lat (msec) : 500=0.63% 00:12:35.995 cpu : usr=0.50%, sys=0.30%, ctx=2476, majf=0, minf=5 00:12:35.995 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.995 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.995 issued rwts: total=640,780,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.995 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.995 job49: (groupid=0, jobs=1): err= 0: pid=81827: Tue Jul 23 05:03:36 2024 00:12:35.995 read: IOPS=80, BW=10.0MiB/s (10.5MB/s)(80.0MiB/7976msec) 00:12:35.995 slat (usec): min=6, max=1456, avg=68.38, stdev=143.09 00:12:35.995 clat (msec): min=4, max=187, avg=18.04, stdev=24.19 00:12:35.995 lat (msec): min=4, max=187, avg=18.10, stdev=24.19 00:12:35.995 clat percentiles (msec): 00:12:35.995 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 9], 00:12:35.995 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 13], 60.00th=[ 14], 00:12:35.995 | 70.00th=[ 15], 80.00th=[ 18], 90.00th=[ 30], 95.00th=[ 50], 00:12:35.995 | 99.00th=[ 169], 99.50th=[ 180], 99.90th=[ 188], 99.95th=[ 188], 00:12:35.995 | 99.99th=[ 188] 00:12:35.995 write: IOPS=77, BW=9914KiB/s (10.2MB/s)(83.0MiB/8573msec); 0 zone resets 00:12:35.995 slat (usec): min=38, max=2161, avg=140.88, stdev=201.74 00:12:35.995 clat (msec): min=32, max=283, avg=102.62, stdev=43.85 00:12:35.995 lat (msec): min=32, max=283, avg=102.76, stdev=43.85 00:12:35.995 clat percentiles (msec): 00:12:35.995 | 1.00th=[ 54], 5.00th=[ 58], 10.00th=[ 61], 20.00th=[ 68], 00:12:35.995 | 30.00th=[ 74], 40.00th=[ 82], 50.00th=[ 90], 60.00th=[ 102], 00:12:35.995 | 70.00th=[ 114], 80.00th=[ 133], 90.00th=[ 163], 95.00th=[ 188], 00:12:35.995 | 99.00th=[ 257], 99.50th=[ 275], 99.90th=[ 284], 99.95th=[ 284], 
00:12:35.995 | 99.99th=[ 284] 00:12:35.995 bw ( KiB/s): min= 4854, max=15616, per=0.73%, avg=9099.89, stdev=3388.70, samples=18 00:12:35.995 iops : min= 37, max= 122, avg=70.94, stdev=26.46, samples=18 00:12:35.995 lat (msec) : 10=17.56%, 20=23.62%, 50=6.06%, 100=30.83%, 250=21.40% 00:12:35.995 lat (msec) : 500=0.54% 00:12:35.995 cpu : usr=0.42%, sys=0.31%, ctx=2318, majf=0, minf=9 00:12:35.995 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.995 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.995 issued rwts: total=640,664,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.995 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.995 job50: (groupid=0, jobs=1): err= 0: pid=81828: Tue Jul 23 05:03:36 2024 00:12:35.995 read: IOPS=124, BW=15.5MiB/s (16.3MB/s)(140MiB/9014msec) 00:12:35.995 slat (usec): min=6, max=1664, avg=54.24, stdev=122.54 00:12:35.995 clat (usec): min=3197, max=42068, avg=8297.75, stdev=4416.35 00:12:35.995 lat (usec): min=3217, max=42080, avg=8351.99, stdev=4415.51 00:12:35.995 clat percentiles (usec): 00:12:35.995 | 1.00th=[ 3359], 5.00th=[ 4293], 10.00th=[ 5014], 20.00th=[ 5669], 00:12:35.995 | 30.00th=[ 6128], 40.00th=[ 6521], 50.00th=[ 6980], 60.00th=[ 7635], 00:12:35.995 | 70.00th=[ 8586], 80.00th=[10028], 90.00th=[12649], 95.00th=[15926], 00:12:35.995 | 99.00th=[26870], 99.50th=[33424], 99.90th=[42206], 99.95th=[42206], 00:12:35.995 | 99.99th=[42206] 00:12:35.996 write: IOPS=126, BW=15.9MiB/s (16.6MB/s)(140MiB/8840msec); 0 zone resets 00:12:35.996 slat (usec): min=31, max=1924, avg=131.83, stdev=177.77 00:12:35.996 clat (msec): min=20, max=172, avg=62.28, stdev=23.56 00:12:35.996 lat (msec): min=20, max=172, avg=62.41, stdev=23.55 00:12:35.996 clat percentiles (msec): 00:12:35.996 | 1.00th=[ 29], 5.00th=[ 38], 10.00th=[ 41], 20.00th=[ 45], 00:12:35.996 | 30.00th=[ 48], 40.00th=[ 53], 
50.00th=[ 57], 60.00th=[ 63], 00:12:35.996 | 70.00th=[ 68], 80.00th=[ 75], 90.00th=[ 94], 95.00th=[ 112], 00:12:35.996 | 99.00th=[ 148], 99.50th=[ 161], 99.90th=[ 171], 99.95th=[ 174], 00:12:35.996 | 99.99th=[ 174] 00:12:35.996 bw ( KiB/s): min= 7680, max=24320, per=1.15%, avg=14336.00, stdev=4474.32, samples=20 00:12:35.996 iops : min= 60, max= 190, avg=112.00, stdev=34.96, samples=20 00:12:35.996 lat (msec) : 4=2.01%, 10=37.88%, 20=8.57%, 50=19.46%, 100=28.34% 00:12:35.996 lat (msec) : 250=3.75% 00:12:35.996 cpu : usr=0.77%, sys=0.45%, ctx=3674, majf=0, minf=3 00:12:35.996 IO depths : 1=0.7%, 2=1.2%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.996 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.996 issued rwts: total=1120,1121,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.996 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.996 job51: (groupid=0, jobs=1): err= 0: pid=81829: Tue Jul 23 05:03:36 2024 00:12:35.996 read: IOPS=107, BW=13.4MiB/s (14.1MB/s)(120MiB/8953msec) 00:12:35.996 slat (usec): min=7, max=1384, avg=72.27, stdev=146.65 00:12:35.996 clat (usec): min=2418, max=83046, avg=9267.82, stdev=8805.77 00:12:35.996 lat (usec): min=2478, max=83134, avg=9340.09, stdev=8808.12 00:12:35.996 clat percentiles (usec): 00:12:35.996 | 1.00th=[ 3326], 5.00th=[ 3720], 10.00th=[ 4015], 20.00th=[ 4883], 00:12:35.996 | 30.00th=[ 5866], 40.00th=[ 6456], 50.00th=[ 7242], 60.00th=[ 7963], 00:12:35.996 | 70.00th=[ 9110], 80.00th=[10421], 90.00th=[14353], 95.00th=[20055], 00:12:35.996 | 99.00th=[57410], 99.50th=[63177], 99.90th=[83362], 99.95th=[83362], 00:12:35.996 | 99.99th=[83362] 00:12:35.996 write: IOPS=117, BW=14.7MiB/s (15.5MB/s)(131MiB/8885msec); 0 zone resets 00:12:35.996 slat (usec): min=38, max=3041, avg=147.56, stdev=242.77 00:12:35.996 clat (msec): min=29, max=236, avg=67.30, stdev=28.13 00:12:35.996 lat (msec): min=29, 
max=237, avg=67.44, stdev=28.14 00:12:35.996 clat percentiles (msec): 00:12:35.996 | 1.00th=[ 34], 5.00th=[ 39], 10.00th=[ 42], 20.00th=[ 45], 00:12:35.996 | 30.00th=[ 50], 40.00th=[ 56], 50.00th=[ 62], 60.00th=[ 68], 00:12:35.996 | 70.00th=[ 75], 80.00th=[ 83], 90.00th=[ 102], 95.00th=[ 125], 00:12:35.996 | 99.00th=[ 167], 99.50th=[ 203], 99.90th=[ 236], 99.95th=[ 236], 00:12:35.996 | 99.99th=[ 236] 00:12:35.996 bw ( KiB/s): min= 4864, max=23040, per=1.07%, avg=13318.25, stdev=5113.74, samples=20 00:12:35.996 iops : min= 38, max= 180, avg=103.90, stdev=39.87, samples=20 00:12:35.996 lat (msec) : 4=4.73%, 10=32.57%, 20=8.12%, 50=17.88%, 100=31.27% 00:12:35.996 lat (msec) : 250=5.43% 00:12:35.996 cpu : usr=0.81%, sys=0.32%, ctx=3394, majf=0, minf=1 00:12:35.996 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.996 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.996 issued rwts: total=960,1048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.996 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.996 job52: (groupid=0, jobs=1): err= 0: pid=81830: Tue Jul 23 05:03:36 2024 00:12:35.996 read: IOPS=121, BW=15.1MiB/s (15.9MB/s)(136MiB/8970msec) 00:12:35.996 slat (usec): min=6, max=2371, avg=56.17, stdev=137.71 00:12:35.996 clat (usec): min=3316, max=71917, avg=8789.30, stdev=6245.65 00:12:35.996 lat (usec): min=3343, max=71926, avg=8845.47, stdev=6249.07 00:12:35.996 clat percentiles (usec): 00:12:35.996 | 1.00th=[ 3818], 5.00th=[ 4228], 10.00th=[ 4686], 20.00th=[ 5407], 00:12:35.996 | 30.00th=[ 5997], 40.00th=[ 6521], 50.00th=[ 7308], 60.00th=[ 7963], 00:12:35.996 | 70.00th=[ 9241], 80.00th=[10945], 90.00th=[13304], 95.00th=[16712], 00:12:35.996 | 99.00th=[27395], 99.50th=[61604], 99.90th=[71828], 99.95th=[71828], 00:12:35.996 | 99.99th=[71828] 00:12:35.996 write: IOPS=127, BW=15.9MiB/s 
(16.7MB/s)(140MiB/8789msec); 0 zone resets 00:12:35.996 slat (usec): min=38, max=2952, avg=140.17, stdev=253.12 00:12:35.996 clat (msec): min=12, max=176, avg=62.20, stdev=23.60 00:12:35.996 lat (msec): min=12, max=176, avg=62.34, stdev=23.59 00:12:35.996 clat percentiles (msec): 00:12:35.996 | 1.00th=[ 17], 5.00th=[ 38], 10.00th=[ 42], 20.00th=[ 45], 00:12:35.996 | 30.00th=[ 48], 40.00th=[ 53], 50.00th=[ 57], 60.00th=[ 63], 00:12:35.996 | 70.00th=[ 68], 80.00th=[ 79], 90.00th=[ 92], 95.00th=[ 108], 00:12:35.996 | 99.00th=[ 150], 99.50th=[ 161], 99.90th=[ 165], 99.95th=[ 178], 00:12:35.996 | 99.99th=[ 178] 00:12:35.996 bw ( KiB/s): min= 6400, max=25138, per=1.17%, avg=14497.11, stdev=5046.20, samples=19 00:12:35.996 iops : min= 50, max= 196, avg=113.16, stdev=39.37, samples=19 00:12:35.996 lat (msec) : 4=0.72%, 10=36.43%, 20=11.28%, 50=17.76%, 100=30.09% 00:12:35.996 lat (msec) : 250=3.72% 00:12:35.996 cpu : usr=0.72%, sys=0.47%, ctx=3642, majf=0, minf=1 00:12:35.996 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.996 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.996 issued rwts: total=1087,1120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.996 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.996 job53: (groupid=0, jobs=1): err= 0: pid=81831: Tue Jul 23 05:03:36 2024 00:12:35.996 read: IOPS=120, BW=15.0MiB/s (15.7MB/s)(140MiB/9329msec) 00:12:35.996 slat (usec): min=6, max=2434, avg=59.74, stdev=139.95 00:12:35.996 clat (usec): min=3175, max=92283, avg=9601.05, stdev=8252.47 00:12:35.996 lat (usec): min=3197, max=92297, avg=9660.79, stdev=8250.37 00:12:35.996 clat percentiles (usec): 00:12:35.996 | 1.00th=[ 4228], 5.00th=[ 4883], 10.00th=[ 5211], 20.00th=[ 5800], 00:12:35.996 | 30.00th=[ 6390], 40.00th=[ 6849], 50.00th=[ 7373], 60.00th=[ 8586], 00:12:35.996 | 70.00th=[ 9634], 80.00th=[11338], 
90.00th=[14353], 95.00th=[19530], 00:12:35.996 | 99.00th=[39584], 99.50th=[82314], 99.90th=[90702], 99.95th=[92799], 00:12:35.996 | 99.99th=[92799] 00:12:35.996 write: IOPS=130, BW=16.3MiB/s (17.1MB/s)(142MiB/8690msec); 0 zone resets 00:12:35.996 slat (usec): min=35, max=6642, avg=162.11, stdev=401.63 00:12:35.996 clat (msec): min=4, max=178, avg=60.71, stdev=22.69 00:12:35.996 lat (msec): min=4, max=178, avg=60.87, stdev=22.66 00:12:35.996 clat percentiles (msec): 00:12:35.996 | 1.00th=[ 8], 5.00th=[ 37], 10.00th=[ 41], 20.00th=[ 45], 00:12:35.996 | 30.00th=[ 49], 40.00th=[ 53], 50.00th=[ 57], 60.00th=[ 62], 00:12:35.996 | 70.00th=[ 67], 80.00th=[ 74], 90.00th=[ 90], 95.00th=[ 107], 00:12:35.996 | 99.00th=[ 133], 99.50th=[ 142], 99.90th=[ 153], 99.95th=[ 180], 00:12:35.996 | 99.99th=[ 180] 00:12:35.996 bw ( KiB/s): min= 8960, max=29696, per=1.16%, avg=14407.80, stdev=5009.22, samples=20 00:12:35.996 iops : min= 70, max= 232, avg=112.40, stdev=39.22, samples=20 00:12:35.996 lat (msec) : 4=0.22%, 10=36.57%, 20=11.90%, 50=17.67%, 100=30.09% 00:12:35.996 lat (msec) : 250=3.55% 00:12:35.996 cpu : usr=0.87%, sys=0.41%, ctx=3705, majf=0, minf=3 00:12:35.996 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.996 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.996 issued rwts: total=1120,1133,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.996 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.996 job54: (groupid=0, jobs=1): err= 0: pid=81832: Tue Jul 23 05:03:36 2024 00:12:35.996 read: IOPS=109, BW=13.7MiB/s (14.4MB/s)(120MiB/8741msec) 00:12:35.996 slat (usec): min=6, max=4905, avg=59.81, stdev=199.17 00:12:35.996 clat (usec): min=2728, max=46210, avg=8684.27, stdev=5950.04 00:12:35.996 lat (usec): min=2750, max=46245, avg=8744.08, stdev=5961.58 00:12:35.996 clat percentiles (usec): 00:12:35.996 | 1.00th=[ 3326], 
5.00th=[ 3818], 10.00th=[ 4359], 20.00th=[ 4883], 00:12:35.996 | 30.00th=[ 5473], 40.00th=[ 6128], 50.00th=[ 6718], 60.00th=[ 7767], 00:12:35.996 | 70.00th=[ 8979], 80.00th=[10945], 90.00th=[14746], 95.00th=[21365], 00:12:35.996 | 99.00th=[35390], 99.50th=[42730], 99.90th=[46400], 99.95th=[46400], 00:12:35.996 | 99.99th=[46400] 00:12:35.996 write: IOPS=124, BW=15.6MiB/s (16.4MB/s)(140MiB/8949msec); 0 zone resets 00:12:35.997 slat (usec): min=38, max=3555, avg=136.53, stdev=227.72 00:12:35.997 clat (msec): min=17, max=183, avg=63.63, stdev=21.51 00:12:35.997 lat (msec): min=17, max=183, avg=63.77, stdev=21.52 00:12:35.997 clat percentiles (msec): 00:12:35.997 | 1.00th=[ 29], 5.00th=[ 38], 10.00th=[ 41], 20.00th=[ 46], 00:12:35.997 | 30.00th=[ 51], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 66], 00:12:35.997 | 70.00th=[ 71], 80.00th=[ 80], 90.00th=[ 92], 95.00th=[ 103], 00:12:35.997 | 99.00th=[ 136], 99.50th=[ 142], 99.90th=[ 178], 99.95th=[ 184], 00:12:35.997 | 99.99th=[ 184] 00:12:35.997 bw ( KiB/s): min= 6400, max=20736, per=1.14%, avg=14133.63, stdev=4319.64, samples=19 00:12:35.997 iops : min= 50, max= 162, avg=110.32, stdev=33.74, samples=19 00:12:35.997 lat (msec) : 4=2.74%, 10=32.45%, 20=8.43%, 50=17.96%, 100=35.15% 00:12:35.997 lat (msec) : 250=3.27% 00:12:35.997 cpu : usr=0.66%, sys=0.48%, ctx=3526, majf=0, minf=8 00:12:35.997 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.997 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.997 issued rwts: total=960,1117,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.997 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.997 job55: (groupid=0, jobs=1): err= 0: pid=81833: Tue Jul 23 05:03:36 2024 00:12:35.997 read: IOPS=111, BW=14.0MiB/s (14.7MB/s)(120MiB/8573msec) 00:12:35.997 slat (usec): min=7, max=1185, avg=65.58, stdev=119.26 00:12:35.997 clat (usec): 
min=1708, max=139246, avg=13786.70, stdev=18367.70 00:12:35.997 lat (msec): min=2, max=139, avg=13.85, stdev=18.37 00:12:35.997 clat percentiles (msec): 00:12:35.997 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 5], 20.00th=[ 6], 00:12:35.997 | 30.00th=[ 7], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 10], 00:12:35.997 | 70.00th=[ 11], 80.00th=[ 15], 90.00th=[ 27], 95.00th=[ 48], 00:12:35.997 | 99.00th=[ 101], 99.50th=[ 138], 99.90th=[ 140], 99.95th=[ 140], 00:12:35.997 | 99.99th=[ 140] 00:12:35.997 write: IOPS=118, BW=14.9MiB/s (15.6MB/s)(124MiB/8346msec); 0 zone resets 00:12:35.997 slat (usec): min=40, max=3620, avg=164.02, stdev=263.03 00:12:35.997 clat (msec): min=19, max=243, avg=66.77, stdev=26.62 00:12:35.997 lat (msec): min=19, max=243, avg=66.93, stdev=26.61 00:12:35.997 clat percentiles (msec): 00:12:35.997 | 1.00th=[ 33], 5.00th=[ 40], 10.00th=[ 44], 20.00th=[ 48], 00:12:35.997 | 30.00th=[ 53], 40.00th=[ 56], 50.00th=[ 60], 60.00th=[ 66], 00:12:35.997 | 70.00th=[ 74], 80.00th=[ 83], 90.00th=[ 99], 95.00th=[ 114], 00:12:35.997 | 99.00th=[ 184], 99.50th=[ 205], 99.90th=[ 243], 99.95th=[ 243], 00:12:35.997 | 99.99th=[ 243] 00:12:35.997 bw ( KiB/s): min= 2560, max=18688, per=1.01%, avg=12591.20, stdev=5369.78, samples=20 00:12:35.997 iops : min= 20, max= 146, avg=98.25, stdev=41.96, samples=20 00:12:35.997 lat (msec) : 2=0.05%, 4=1.95%, 10=30.17%, 20=10.66%, 50=17.32% 00:12:35.997 lat (msec) : 100=34.99%, 250=4.87% 00:12:35.997 cpu : usr=0.60%, sys=0.51%, ctx=3452, majf=0, minf=7 00:12:35.997 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.997 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.997 issued rwts: total=960,992,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.997 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.997 job56: (groupid=0, jobs=1): err= 0: pid=81834: Tue Jul 23 05:03:36 2024 00:12:35.997 
read: IOPS=111, BW=13.9MiB/s (14.6MB/s)(120MiB/8644msec) 00:12:35.997 slat (usec): min=6, max=2094, avg=59.24, stdev=140.74 00:12:35.997 clat (usec): min=2869, max=59303, avg=10221.48, stdev=7652.81 00:12:35.997 lat (usec): min=3019, max=59322, avg=10280.72, stdev=7644.81 00:12:35.997 clat percentiles (usec): 00:12:35.997 | 1.00th=[ 3654], 5.00th=[ 4047], 10.00th=[ 4490], 20.00th=[ 5342], 00:12:35.997 | 30.00th=[ 6521], 40.00th=[ 7504], 50.00th=[ 8094], 60.00th=[ 9765], 00:12:35.997 | 70.00th=[10945], 80.00th=[12256], 90.00th=[15664], 95.00th=[22152], 00:12:35.997 | 99.00th=[49546], 99.50th=[56361], 99.90th=[59507], 99.95th=[59507], 00:12:35.997 | 99.99th=[59507] 00:12:35.997 write: IOPS=116, BW=14.5MiB/s (15.2MB/s)(127MiB/8773msec); 0 zone resets 00:12:35.997 slat (usec): min=36, max=4918, avg=158.85, stdev=308.05 00:12:35.997 clat (msec): min=27, max=286, avg=68.37, stdev=27.50 00:12:35.997 lat (msec): min=27, max=286, avg=68.53, stdev=27.51 00:12:35.997 clat percentiles (msec): 00:12:35.997 | 1.00th=[ 37], 5.00th=[ 40], 10.00th=[ 43], 20.00th=[ 48], 00:12:35.997 | 30.00th=[ 54], 40.00th=[ 58], 50.00th=[ 64], 60.00th=[ 69], 00:12:35.997 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 99], 95.00th=[ 116], 00:12:35.997 | 99.00th=[ 167], 99.50th=[ 213], 99.90th=[ 264], 99.95th=[ 288], 00:12:35.997 | 99.99th=[ 288] 00:12:35.997 bw ( KiB/s): min= 5888, max=19456, per=1.04%, avg=12955.30, stdev=3928.00, samples=20 00:12:35.997 iops : min= 46, max= 152, avg=101.20, stdev=30.67, samples=20 00:12:35.997 lat (msec) : 4=1.97%, 10=28.04%, 20=15.51%, 50=14.75%, 100=34.87% 00:12:35.997 lat (msec) : 250=4.70%, 500=0.15% 00:12:35.997 cpu : usr=0.68%, sys=0.40%, ctx=3382, majf=0, minf=1 00:12:35.997 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.997 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.997 issued rwts: total=960,1019,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:12:35.997 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.997 job57: (groupid=0, jobs=1): err= 0: pid=81835: Tue Jul 23 05:03:36 2024 00:12:35.997 read: IOPS=120, BW=15.1MiB/s (15.9MB/s)(140MiB/9259msec) 00:12:35.997 slat (usec): min=6, max=4638, avg=61.20, stdev=205.80 00:12:35.997 clat (usec): min=2589, max=86794, avg=11497.01, stdev=10533.33 00:12:35.997 lat (usec): min=2618, max=86801, avg=11558.20, stdev=10530.89 00:12:35.997 clat percentiles (usec): 00:12:35.997 | 1.00th=[ 4293], 5.00th=[ 5014], 10.00th=[ 5407], 20.00th=[ 6063], 00:12:35.997 | 30.00th=[ 6521], 40.00th=[ 7308], 50.00th=[ 8586], 60.00th=[ 9765], 00:12:35.997 | 70.00th=[11469], 80.00th=[13042], 90.00th=[19006], 95.00th=[27919], 00:12:35.997 | 99.00th=[70779], 99.50th=[76022], 99.90th=[86508], 99.95th=[86508], 00:12:35.997 | 99.99th=[86508] 00:12:35.997 write: IOPS=133, BW=16.7MiB/s (17.5MB/s)(140MiB/8369msec); 0 zone resets 00:12:35.997 slat (usec): min=37, max=2438, avg=133.71, stdev=192.75 00:12:35.997 clat (msec): min=8, max=167, avg=59.19, stdev=22.11 00:12:35.997 lat (msec): min=8, max=167, avg=59.32, stdev=22.13 00:12:35.997 clat percentiles (msec): 00:12:35.997 | 1.00th=[ 14], 5.00th=[ 34], 10.00th=[ 39], 20.00th=[ 44], 00:12:35.997 | 30.00th=[ 48], 40.00th=[ 53], 50.00th=[ 57], 60.00th=[ 61], 00:12:35.997 | 70.00th=[ 65], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 99], 00:12:35.997 | 99.00th=[ 146], 99.50th=[ 155], 99.90th=[ 167], 99.95th=[ 169], 00:12:35.997 | 99.99th=[ 169] 00:12:35.997 bw ( KiB/s): min= 6656, max=28672, per=1.19%, avg=14758.58, stdev=5433.26, samples=19 00:12:35.997 iops : min= 52, max= 224, avg=115.11, stdev=42.41, samples=19 00:12:35.997 lat (msec) : 4=0.13%, 10=30.54%, 20=16.25%, 50=19.82%, 100=30.98% 00:12:35.997 lat (msec) : 250=2.28% 00:12:35.997 cpu : usr=0.74%, sys=0.50%, ctx=3690, majf=0, minf=1 00:12:35.997 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.997 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.997 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.997 issued rwts: total=1120,1120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.998 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.998 job58: (groupid=0, jobs=1): err= 0: pid=81836: Tue Jul 23 05:03:36 2024 00:12:35.998 read: IOPS=108, BW=13.6MiB/s (14.3MB/s)(120MiB/8824msec) 00:12:35.998 slat (usec): min=5, max=2231, avg=65.45, stdev=156.71 00:12:35.998 clat (usec): min=2189, max=95802, avg=9627.94, stdev=8420.66 00:12:35.998 lat (usec): min=3460, max=95897, avg=9693.40, stdev=8413.66 00:12:35.998 clat percentiles (usec): 00:12:35.998 | 1.00th=[ 3916], 5.00th=[ 4490], 10.00th=[ 4948], 20.00th=[ 5800], 00:12:35.998 | 30.00th=[ 6521], 40.00th=[ 7177], 50.00th=[ 7963], 60.00th=[ 8717], 00:12:35.998 | 70.00th=[10159], 80.00th=[11731], 90.00th=[14615], 95.00th=[17433], 00:12:35.998 | 99.00th=[28181], 99.50th=[91751], 99.90th=[95945], 99.95th=[95945], 00:12:35.998 | 99.99th=[95945] 00:12:35.998 write: IOPS=123, BW=15.4MiB/s (16.2MB/s)(136MiB/8843msec); 0 zone resets 00:12:35.998 slat (usec): min=30, max=2142, avg=137.59, stdev=189.00 00:12:35.998 clat (msec): min=27, max=173, avg=64.32, stdev=21.69 00:12:35.998 lat (msec): min=27, max=174, avg=64.46, stdev=21.71 00:12:35.998 clat percentiles (msec): 00:12:35.998 | 1.00th=[ 33], 5.00th=[ 38], 10.00th=[ 40], 20.00th=[ 45], 00:12:35.998 | 30.00th=[ 50], 40.00th=[ 56], 50.00th=[ 62], 60.00th=[ 67], 00:12:35.998 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 93], 95.00th=[ 103], 00:12:35.998 | 99.00th=[ 133], 99.50th=[ 148], 99.90th=[ 161], 99.95th=[ 174], 00:12:35.998 | 99.99th=[ 174] 00:12:35.998 bw ( KiB/s): min= 6912, max=21248, per=1.12%, avg=13869.70, stdev=4559.19, samples=20 00:12:35.998 iops : min= 54, max= 166, avg=108.20, stdev=35.64, samples=20 00:12:35.998 lat (msec) : 4=0.78%, 10=31.35%, 20=13.60%, 50=16.92%, 100=34.03% 00:12:35.998 lat (msec) : 
250=3.32% 00:12:35.998 cpu : usr=0.68%, sys=0.42%, ctx=3467, majf=0, minf=5 00:12:35.998 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.998 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.998 issued rwts: total=960,1091,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.998 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.998 job59: (groupid=0, jobs=1): err= 0: pid=81837: Tue Jul 23 05:03:36 2024 00:12:35.998 read: IOPS=104, BW=13.0MiB/s (13.7MB/s)(120MiB/9210msec) 00:12:35.998 slat (usec): min=6, max=2106, avg=57.40, stdev=137.78 00:12:35.998 clat (msec): min=3, max=111, avg= 9.73, stdev=10.77 00:12:35.998 lat (msec): min=3, max=111, avg= 9.78, stdev=10.77 00:12:35.998 clat percentiles (msec): 00:12:35.998 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 6], 00:12:35.998 | 30.00th=[ 7], 40.00th=[ 7], 50.00th=[ 8], 60.00th=[ 8], 00:12:35.998 | 70.00th=[ 9], 80.00th=[ 11], 90.00th=[ 15], 95.00th=[ 20], 00:12:35.998 | 99.00th=[ 63], 99.50th=[ 106], 99.90th=[ 112], 99.95th=[ 112], 00:12:35.998 | 99.99th=[ 112] 00:12:35.998 write: IOPS=122, BW=15.3MiB/s (16.0MB/s)(135MiB/8859msec); 0 zone resets 00:12:35.998 slat (usec): min=37, max=2577, avg=138.71, stdev=194.62 00:12:35.998 clat (msec): min=4, max=249, avg=64.94, stdev=26.99 00:12:35.998 lat (msec): min=4, max=249, avg=65.07, stdev=26.99 00:12:35.998 clat percentiles (msec): 00:12:35.998 | 1.00th=[ 16], 5.00th=[ 38], 10.00th=[ 40], 20.00th=[ 45], 00:12:35.998 | 30.00th=[ 49], 40.00th=[ 55], 50.00th=[ 61], 60.00th=[ 66], 00:12:35.998 | 70.00th=[ 73], 80.00th=[ 81], 90.00th=[ 101], 95.00th=[ 116], 00:12:35.998 | 99.00th=[ 155], 99.50th=[ 163], 99.90th=[ 209], 99.95th=[ 249], 00:12:35.998 | 99.99th=[ 249] 00:12:35.998 bw ( KiB/s): min= 8192, max=26880, per=1.11%, avg=13759.70, stdev=4546.88, samples=20 00:12:35.998 iops : min= 64, max= 210, avg=107.40, 
stdev=35.59, samples=20 00:12:35.998 lat (msec) : 4=0.64%, 10=35.60%, 20=9.70%, 50=17.92%, 100=30.71% 00:12:35.998 lat (msec) : 250=5.44% 00:12:35.998 cpu : usr=0.76%, sys=0.36%, ctx=3365, majf=0, minf=3 00:12:35.998 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.998 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.998 issued rwts: total=960,1082,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.998 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.998 job60: (groupid=0, jobs=1): err= 0: pid=81838: Tue Jul 23 05:03:36 2024 00:12:35.998 read: IOPS=110, BW=13.8MiB/s (14.5MB/s)(120MiB/8664msec) 00:12:35.998 slat (usec): min=5, max=1291, avg=58.62, stdev=116.78 00:12:35.998 clat (msec): min=2, max=187, avg=11.18, stdev=16.12 00:12:35.998 lat (msec): min=2, max=187, avg=11.24, stdev=16.12 00:12:35.998 clat percentiles (msec): 00:12:35.998 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 5], 20.00th=[ 6], 00:12:35.998 | 30.00th=[ 7], 40.00th=[ 8], 50.00th=[ 8], 60.00th=[ 9], 00:12:35.998 | 70.00th=[ 10], 80.00th=[ 11], 90.00th=[ 17], 95.00th=[ 24], 00:12:35.998 | 99.00th=[ 95], 99.50th=[ 101], 99.90th=[ 188], 99.95th=[ 188], 00:12:35.998 | 99.99th=[ 188] 00:12:35.998 write: IOPS=119, BW=14.9MiB/s (15.6MB/s)(129MiB/8653msec); 0 zone resets 00:12:35.998 slat (usec): min=29, max=3473, avg=158.91, stdev=265.45 00:12:35.998 clat (msec): min=29, max=160, avg=66.57, stdev=20.88 00:12:35.998 lat (msec): min=29, max=160, avg=66.72, stdev=20.88 00:12:35.998 clat percentiles (msec): 00:12:35.998 | 1.00th=[ 39], 5.00th=[ 41], 10.00th=[ 44], 20.00th=[ 49], 00:12:35.998 | 30.00th=[ 54], 40.00th=[ 58], 50.00th=[ 63], 60.00th=[ 68], 00:12:35.998 | 70.00th=[ 74], 80.00th=[ 83], 90.00th=[ 95], 95.00th=[ 108], 00:12:35.998 | 99.00th=[ 136], 99.50th=[ 142], 99.90th=[ 146], 99.95th=[ 161], 00:12:35.998 | 99.99th=[ 161] 00:12:35.998 bw ( 
KiB/s): min= 4096, max=18468, per=1.04%, avg=12933.84, stdev=4246.59, samples=19 00:12:35.998 iops : min= 32, max= 144, avg=100.89, stdev=33.12, samples=19 00:12:35.998 lat (msec) : 4=1.26%, 10=35.49%, 20=8.28%, 50=13.55%, 100=37.35% 00:12:35.998 lat (msec) : 250=4.07% 00:12:35.998 cpu : usr=0.79%, sys=0.32%, ctx=3323, majf=0, minf=3 00:12:35.998 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.998 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.998 issued rwts: total=960,1032,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.998 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.998 job61: (groupid=0, jobs=1): err= 0: pid=81839: Tue Jul 23 05:03:36 2024 00:12:35.998 read: IOPS=124, BW=15.6MiB/s (16.4MB/s)(140MiB/8968msec) 00:12:35.998 slat (usec): min=5, max=1290, avg=49.25, stdev=97.16 00:12:35.998 clat (usec): min=2620, max=39136, avg=8773.80, stdev=4907.74 00:12:35.998 lat (usec): min=2676, max=39149, avg=8823.05, stdev=4907.82 00:12:35.998 clat percentiles (usec): 00:12:35.998 | 1.00th=[ 3425], 5.00th=[ 4146], 10.00th=[ 4490], 20.00th=[ 5014], 00:12:35.998 | 30.00th=[ 5604], 40.00th=[ 6718], 50.00th=[ 7832], 60.00th=[ 8586], 00:12:35.998 | 70.00th=[ 9503], 80.00th=[11076], 90.00th=[14222], 95.00th=[18482], 00:12:35.998 | 99.00th=[27395], 99.50th=[30802], 99.90th=[39060], 99.95th=[39060], 00:12:35.998 | 99.99th=[39060] 00:12:35.998 write: IOPS=128, BW=16.1MiB/s (16.9MB/s)(141MiB/8776msec); 0 zone resets 00:12:35.998 slat (usec): min=35, max=4032, avg=136.95, stdev=234.84 00:12:35.998 clat (msec): min=17, max=179, avg=61.49, stdev=24.01 00:12:35.998 lat (msec): min=17, max=179, avg=61.63, stdev=24.01 00:12:35.998 clat percentiles (msec): 00:12:35.998 | 1.00th=[ 38], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 44], 00:12:35.998 | 30.00th=[ 46], 40.00th=[ 50], 50.00th=[ 55], 60.00th=[ 61], 00:12:35.998 | 
70.00th=[ 68], 80.00th=[ 75], 90.00th=[ 92], 95.00th=[ 112], 00:12:35.998 | 99.00th=[ 153], 99.50th=[ 163], 99.90th=[ 176], 99.95th=[ 180], 00:12:35.998 | 99.99th=[ 180] 00:12:35.998 bw ( KiB/s): min= 5632, max=24112, per=1.16%, avg=14384.05, stdev=5705.10, samples=20 00:12:35.998 iops : min= 44, max= 188, avg=112.20, stdev=44.58, samples=20 00:12:35.998 lat (msec) : 4=1.64%, 10=35.14%, 20=11.20%, 50=22.21%, 100=26.03% 00:12:35.998 lat (msec) : 250=3.78% 00:12:35.998 cpu : usr=0.78%, sys=0.45%, ctx=3703, majf=0, minf=7 00:12:35.998 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.998 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.998 issued rwts: total=1120,1131,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.998 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.998 job62: (groupid=0, jobs=1): err= 0: pid=81840: Tue Jul 23 05:03:36 2024 00:12:35.998 read: IOPS=118, BW=14.8MiB/s (15.6MB/s)(140MiB/9431msec) 00:12:35.998 slat (usec): min=6, max=1298, avg=70.29, stdev=141.22 00:12:35.998 clat (msec): min=3, max=115, avg=12.12, stdev=13.00 00:12:35.998 lat (msec): min=3, max=115, avg=12.19, stdev=13.01 00:12:35.998 clat percentiles (msec): 00:12:35.998 | 1.00th=[ 5], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 7], 00:12:35.999 | 30.00th=[ 7], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 10], 00:12:35.999 | 70.00th=[ 12], 80.00th=[ 14], 90.00th=[ 19], 95.00th=[ 27], 00:12:35.999 | 99.00th=[ 93], 99.50th=[ 101], 99.90th=[ 113], 99.95th=[ 115], 00:12:35.999 | 99.99th=[ 115] 00:12:35.999 write: IOPS=138, BW=17.3MiB/s (18.1MB/s)(144MiB/8337msec); 0 zone resets 00:12:35.999 slat (usec): min=38, max=5049, avg=137.55, stdev=265.71 00:12:35.999 clat (usec): min=1173, max=189329, avg=57244.80, stdev=24060.00 00:12:35.999 lat (usec): min=1244, max=189387, avg=57382.35, stdev=24061.60 00:12:35.999 clat percentiles (msec): 
00:12:35.999 | 1.00th=[ 4], 5.00th=[ 19], 10.00th=[ 39], 20.00th=[ 42], 00:12:35.999 | 30.00th=[ 46], 40.00th=[ 51], 50.00th=[ 54], 60.00th=[ 58], 00:12:35.999 | 70.00th=[ 64], 80.00th=[ 71], 90.00th=[ 83], 95.00th=[ 104], 00:12:35.999 | 99.00th=[ 142], 99.50th=[ 155], 99.90th=[ 186], 99.95th=[ 190], 00:12:35.999 | 99.99th=[ 190] 00:12:35.999 bw ( KiB/s): min= 5632, max=35328, per=1.18%, avg=14663.25, stdev=7306.98, samples=20 00:12:35.999 iops : min= 44, max= 276, avg=114.40, stdev=57.13, samples=20 00:12:35.999 lat (msec) : 2=0.04%, 4=0.66%, 10=31.63%, 20=15.13%, 50=20.90% 00:12:35.999 lat (msec) : 100=28.42%, 250=3.21% 00:12:35.999 cpu : usr=0.79%, sys=0.49%, ctx=3754, majf=0, minf=5 00:12:35.999 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.999 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.999 issued rwts: total=1120,1153,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.999 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.999 job63: (groupid=0, jobs=1): err= 0: pid=81841: Tue Jul 23 05:03:36 2024 00:12:35.999 read: IOPS=119, BW=14.9MiB/s (15.6MB/s)(133MiB/8930msec) 00:12:35.999 slat (usec): min=5, max=1141, avg=49.05, stdev=101.67 00:12:35.999 clat (usec): min=2927, max=88943, avg=9700.51, stdev=8291.70 00:12:35.999 lat (usec): min=3060, max=88992, avg=9749.56, stdev=8294.48 00:12:35.999 clat percentiles (usec): 00:12:35.999 | 1.00th=[ 3589], 5.00th=[ 4228], 10.00th=[ 5014], 20.00th=[ 6063], 00:12:35.999 | 30.00th=[ 6652], 40.00th=[ 7308], 50.00th=[ 8029], 60.00th=[ 8717], 00:12:35.999 | 70.00th=[ 9241], 80.00th=[10814], 90.00th=[13435], 95.00th=[17695], 00:12:35.999 | 99.00th=[52167], 99.50th=[63701], 99.90th=[79168], 99.95th=[88605], 00:12:35.999 | 99.99th=[88605] 00:12:35.999 write: IOPS=128, BW=16.1MiB/s (16.9MB/s)(140MiB/8693msec); 0 zone resets 00:12:35.999 slat (usec): min=30, max=3271, 
avg=145.05, stdev=232.83 00:12:35.999 clat (msec): min=11, max=196, avg=61.55, stdev=22.87 00:12:35.999 lat (msec): min=11, max=196, avg=61.70, stdev=22.87 00:12:35.999 clat percentiles (msec): 00:12:35.999 | 1.00th=[ 30], 5.00th=[ 39], 10.00th=[ 40], 20.00th=[ 44], 00:12:35.999 | 30.00th=[ 47], 40.00th=[ 52], 50.00th=[ 57], 60.00th=[ 62], 00:12:35.999 | 70.00th=[ 68], 80.00th=[ 77], 90.00th=[ 91], 95.00th=[ 102], 00:12:35.999 | 99.00th=[ 146], 99.50th=[ 174], 99.90th=[ 192], 99.95th=[ 197], 00:12:35.999 | 99.99th=[ 197] 00:12:35.999 bw ( KiB/s): min= 4608, max=23808, per=1.17%, avg=14524.63, stdev=5431.14, samples=19 00:12:35.999 iops : min= 36, max= 186, avg=113.47, stdev=42.43, samples=19 00:12:35.999 lat (msec) : 4=1.19%, 10=35.56%, 20=10.25%, 50=19.04%, 100=31.03% 00:12:35.999 lat (msec) : 250=2.93% 00:12:35.999 cpu : usr=0.60%, sys=0.59%, ctx=3563, majf=0, minf=3 00:12:35.999 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.999 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.999 issued rwts: total=1065,1120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.999 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.999 job64: (groupid=0, jobs=1): err= 0: pid=81842: Tue Jul 23 05:03:36 2024 00:12:35.999 read: IOPS=108, BW=13.6MiB/s (14.2MB/s)(120MiB/8837msec) 00:12:35.999 slat (usec): min=6, max=2071, avg=72.62, stdev=192.90 00:12:35.999 clat (msec): min=2, max=186, avg=13.85, stdev=17.81 00:12:35.999 lat (msec): min=2, max=186, avg=13.92, stdev=17.80 00:12:35.999 clat percentiles (msec): 00:12:35.999 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 7], 00:12:35.999 | 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 11], 00:12:35.999 | 70.00th=[ 12], 80.00th=[ 15], 90.00th=[ 21], 95.00th=[ 47], 00:12:35.999 | 99.00th=[ 78], 99.50th=[ 133], 99.90th=[ 186], 99.95th=[ 186], 00:12:35.999 | 99.99th=[ 186] 
00:12:35.999 write: IOPS=121, BW=15.1MiB/s (15.9MB/s)(126MiB/8344msec); 0 zone resets 00:12:35.999 slat (usec): min=33, max=5697, avg=150.84, stdev=290.77 00:12:35.999 clat (msec): min=25, max=225, avg=65.46, stdev=25.38 00:12:35.999 lat (msec): min=25, max=225, avg=65.61, stdev=25.39 00:12:35.999 clat percentiles (msec): 00:12:35.999 | 1.00th=[ 37], 5.00th=[ 40], 10.00th=[ 42], 20.00th=[ 47], 00:12:35.999 | 30.00th=[ 52], 40.00th=[ 56], 50.00th=[ 61], 60.00th=[ 65], 00:12:35.999 | 70.00th=[ 71], 80.00th=[ 79], 90.00th=[ 94], 95.00th=[ 117], 00:12:35.999 | 99.00th=[ 163], 99.50th=[ 182], 99.90th=[ 203], 99.95th=[ 226], 00:12:35.999 | 99.99th=[ 226] 00:12:35.999 bw ( KiB/s): min= 3840, max=20992, per=1.03%, avg=12836.85, stdev=5124.65, samples=20 00:12:35.999 iops : min= 30, max= 164, avg=100.15, stdev=40.11, samples=20 00:12:35.999 lat (msec) : 4=0.56%, 10=27.46%, 20=15.58%, 50=17.36%, 100=34.37% 00:12:35.999 lat (msec) : 250=4.67% 00:12:35.999 cpu : usr=0.64%, sys=0.47%, ctx=3338, majf=0, minf=3 00:12:35.999 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:35.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.999 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.999 issued rwts: total=960,1010,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.999 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:35.999 job65: (groupid=0, jobs=1): err= 0: pid=81843: Tue Jul 23 05:03:36 2024 00:12:35.999 read: IOPS=108, BW=13.6MiB/s (14.2MB/s)(120MiB/8838msec) 00:12:35.999 slat (usec): min=6, max=1245, avg=50.47, stdev=94.97 00:12:35.999 clat (usec): min=2624, max=72614, avg=10027.47, stdev=6794.47 00:12:35.999 lat (usec): min=2642, max=72625, avg=10077.94, stdev=6793.37 00:12:35.999 clat percentiles (usec): 00:12:35.999 | 1.00th=[ 3752], 5.00th=[ 4686], 10.00th=[ 5342], 20.00th=[ 6456], 00:12:35.999 | 30.00th=[ 7373], 40.00th=[ 8160], 50.00th=[ 8717], 60.00th=[ 9503], 00:12:35.999 
| 70.00th=[10683], 80.00th=[11863], 90.00th=[14484], 95.00th=[17171], 00:12:35.999 | 99.00th=[39584], 99.50th=[67634], 99.90th=[72877], 99.95th=[72877], 00:12:35.999 | 99.99th=[72877] 00:12:35.999 write: IOPS=127, BW=15.9MiB/s (16.7MB/s)(140MiB/8800msec); 0 zone resets 00:12:35.999 slat (usec): min=30, max=4762, avg=155.16, stdev=301.46 00:12:35.999 clat (msec): min=30, max=181, avg=62.29, stdev=20.24 00:12:35.999 lat (msec): min=30, max=181, avg=62.45, stdev=20.24 00:12:35.999 clat percentiles (msec): 00:12:35.999 | 1.00th=[ 34], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 45], 00:12:35.999 | 30.00th=[ 50], 40.00th=[ 55], 50.00th=[ 59], 60.00th=[ 63], 00:12:35.999 | 70.00th=[ 68], 80.00th=[ 79], 90.00th=[ 91], 95.00th=[ 101], 00:12:35.999 | 99.00th=[ 128], 99.50th=[ 150], 99.90th=[ 159], 99.95th=[ 182], 00:12:35.999 | 99.99th=[ 182] 00:12:35.999 bw ( KiB/s): min= 4352, max=21290, per=1.14%, avg=14214.58, stdev=5227.29, samples=19 00:12:35.999 iops : min= 34, max= 166, avg=110.89, stdev=40.78, samples=19 00:12:35.999 lat (msec) : 4=0.72%, 10=28.99%, 20=14.90%, 50=17.84%, 100=34.81% 00:12:35.999 lat (msec) : 250=2.74% 00:12:36.000 cpu : usr=0.74%, sys=0.43%, ctx=3504, majf=0, minf=3 00:12:36.000 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.000 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.000 issued rwts: total=960,1120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.000 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:36.000 job66: (groupid=0, jobs=1): err= 0: pid=81844: Tue Jul 23 05:03:36 2024 00:12:36.000 read: IOPS=121, BW=15.2MiB/s (16.0MB/s)(140MiB/9185msec) 00:12:36.000 slat (usec): min=5, max=1294, avg=60.35, stdev=122.87 00:12:36.000 clat (msec): min=2, max=164, avg=11.31, stdev=15.13 00:12:36.000 lat (msec): min=2, max=164, avg=11.37, stdev=15.12 00:12:36.000 clat percentiles (msec): 
00:12:36.000 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 7], 00:12:36.000 | 30.00th=[ 7], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 10], 00:12:36.000 | 70.00th=[ 11], 80.00th=[ 12], 90.00th=[ 16], 95.00th=[ 19], 00:12:36.000 | 99.00th=[ 72], 99.50th=[ 159], 99.90th=[ 165], 99.95th=[ 165], 00:12:36.000 | 99.99th=[ 165] 00:12:36.000 write: IOPS=137, BW=17.2MiB/s (18.1MB/s)(145MiB/8443msec); 0 zone resets 00:12:36.000 slat (usec): min=31, max=2871, avg=133.10, stdev=201.85 00:12:36.000 clat (msec): min=4, max=171, avg=57.49, stdev=19.22 00:12:36.000 lat (msec): min=5, max=171, avg=57.63, stdev=19.23 00:12:36.000 clat percentiles (msec): 00:12:36.000 | 1.00th=[ 21], 5.00th=[ 39], 10.00th=[ 40], 20.00th=[ 43], 00:12:36.000 | 30.00th=[ 46], 40.00th=[ 50], 50.00th=[ 54], 60.00th=[ 59], 00:12:36.000 | 70.00th=[ 64], 80.00th=[ 70], 90.00th=[ 84], 95.00th=[ 95], 00:12:36.000 | 99.00th=[ 124], 99.50th=[ 130], 99.90th=[ 169], 99.95th=[ 171], 00:12:36.000 | 99.99th=[ 171] 00:12:36.000 bw ( KiB/s): min= 7680, max=27190, per=1.19%, avg=14797.80, stdev=5591.34, samples=20 00:12:36.000 iops : min= 60, max= 212, avg=115.55, stdev=43.62, samples=20 00:12:36.000 lat (msec) : 4=0.18%, 10=33.42%, 20=13.75%, 50=21.77%, 100=28.56% 00:12:36.000 lat (msec) : 250=2.32% 00:12:36.000 cpu : usr=0.71%, sys=0.54%, ctx=3711, majf=0, minf=3 00:12:36.000 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.000 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.000 issued rwts: total=1120,1163,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.000 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:36.000 job67: (groupid=0, jobs=1): err= 0: pid=81845: Tue Jul 23 05:03:36 2024 00:12:36.000 read: IOPS=107, BW=13.5MiB/s (14.1MB/s)(120MiB/8893msec) 00:12:36.000 slat (usec): min=6, max=1744, avg=58.97, stdev=131.58 00:12:36.000 clat (msec): min=3, 
max=104, avg=11.74, stdev=12.60 00:12:36.000 lat (msec): min=3, max=104, avg=11.80, stdev=12.60 00:12:36.000 clat percentiles (msec): 00:12:36.000 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 7], 00:12:36.000 | 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 10], 60.00th=[ 10], 00:12:36.000 | 70.00th=[ 12], 80.00th=[ 13], 90.00th=[ 18], 95.00th=[ 23], 00:12:36.000 | 99.00th=[ 87], 99.50th=[ 95], 99.90th=[ 105], 99.95th=[ 105], 00:12:36.000 | 99.99th=[ 105] 00:12:36.000 write: IOPS=126, BW=15.8MiB/s (16.5MB/s)(136MiB/8598msec); 0 zone resets 00:12:36.000 slat (usec): min=38, max=3670, avg=145.29, stdev=229.85 00:12:36.000 clat (msec): min=20, max=172, avg=62.90, stdev=20.91 00:12:36.000 lat (msec): min=20, max=172, avg=63.05, stdev=20.92 00:12:36.000 clat percentiles (msec): 00:12:36.000 | 1.00th=[ 33], 5.00th=[ 40], 10.00th=[ 42], 20.00th=[ 46], 00:12:36.000 | 30.00th=[ 52], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 64], 00:12:36.000 | 70.00th=[ 69], 80.00th=[ 78], 90.00th=[ 89], 95.00th=[ 100], 00:12:36.000 | 99.00th=[ 142], 99.50th=[ 159], 99.90th=[ 165], 99.95th=[ 174], 00:12:36.000 | 99.99th=[ 174] 00:12:36.000 bw ( KiB/s): min= 3072, max=22016, per=1.11%, avg=13780.60, stdev=5717.15, samples=20 00:12:36.000 iops : min= 24, max= 172, avg=107.55, stdev=44.59, samples=20 00:12:36.000 lat (msec) : 4=1.42%, 10=27.20%, 20=15.31%, 50=16.83%, 100=36.79% 00:12:36.000 lat (msec) : 250=2.45% 00:12:36.000 cpu : usr=0.72%, sys=0.43%, ctx=3488, majf=0, minf=1 00:12:36.000 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.000 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.000 issued rwts: total=960,1084,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.000 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:36.000 job68: (groupid=0, jobs=1): err= 0: pid=81846: Tue Jul 23 05:03:36 2024 00:12:36.000 read: IOPS=107, BW=13.5MiB/s 
(14.1MB/s)(120MiB/8908msec) 00:12:36.000 slat (usec): min=6, max=2732, avg=60.62, stdev=152.31 00:12:36.000 clat (usec): min=2532, max=47623, avg=9670.60, stdev=5520.45 00:12:36.000 lat (usec): min=2783, max=47635, avg=9731.22, stdev=5523.84 00:12:36.000 clat percentiles (usec): 00:12:36.000 | 1.00th=[ 3556], 5.00th=[ 4686], 10.00th=[ 5276], 20.00th=[ 5932], 00:12:36.000 | 30.00th=[ 6456], 40.00th=[ 7373], 50.00th=[ 8586], 60.00th=[ 9503], 00:12:36.000 | 70.00th=[10552], 80.00th=[11994], 90.00th=[15664], 95.00th=[17433], 00:12:36.000 | 99.00th=[36963], 99.50th=[42206], 99.90th=[47449], 99.95th=[47449], 00:12:36.000 | 99.99th=[47449] 00:12:36.000 write: IOPS=123, BW=15.4MiB/s (16.2MB/s)(137MiB/8839msec); 0 zone resets 00:12:36.000 slat (usec): min=37, max=2830, avg=142.96, stdev=198.73 00:12:36.000 clat (msec): min=23, max=196, avg=64.25, stdev=24.53 00:12:36.000 lat (msec): min=23, max=196, avg=64.39, stdev=24.55 00:12:36.000 clat percentiles (msec): 00:12:36.000 | 1.00th=[ 37], 5.00th=[ 39], 10.00th=[ 41], 20.00th=[ 45], 00:12:36.000 | 30.00th=[ 50], 40.00th=[ 55], 50.00th=[ 59], 60.00th=[ 64], 00:12:36.000 | 70.00th=[ 69], 80.00th=[ 79], 90.00th=[ 96], 95.00th=[ 111], 00:12:36.000 | 99.00th=[ 157], 99.50th=[ 167], 99.90th=[ 182], 99.95th=[ 197], 00:12:36.000 | 99.99th=[ 197] 00:12:36.000 bw ( KiB/s): min= 4864, max=22272, per=1.10%, avg=13722.68, stdev=5279.87, samples=19 00:12:36.000 iops : min= 38, max= 174, avg=107.11, stdev=41.25, samples=19 00:12:36.000 lat (msec) : 4=0.97%, 10=29.19%, 20=15.06%, 50=18.37%, 100=32.16% 00:12:36.000 lat (msec) : 250=4.24% 00:12:36.000 cpu : usr=0.74%, sys=0.42%, ctx=3469, majf=0, minf=3 00:12:36.000 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.001 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.001 issued rwts: total=960,1092,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.001 
latency : target=0, window=0, percentile=100.00%, depth=8 00:12:36.001 job69: (groupid=0, jobs=1): err= 0: pid=81847: Tue Jul 23 05:03:36 2024 00:12:36.001 read: IOPS=113, BW=14.2MiB/s (14.9MB/s)(120MiB/8457msec) 00:12:36.001 slat (usec): min=7, max=4386, avg=70.97, stdev=210.33 00:12:36.001 clat (msec): min=2, max=209, avg=13.05, stdev=19.35 00:12:36.001 lat (msec): min=2, max=209, avg=13.12, stdev=19.35 00:12:36.001 clat percentiles (msec): 00:12:36.001 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 6], 00:12:36.001 | 30.00th=[ 7], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 10], 00:12:36.001 | 70.00th=[ 12], 80.00th=[ 14], 90.00th=[ 20], 95.00th=[ 33], 00:12:36.001 | 99.00th=[ 94], 99.50th=[ 207], 99.90th=[ 209], 99.95th=[ 209], 00:12:36.001 | 99.99th=[ 209] 00:12:36.001 write: IOPS=117, BW=14.7MiB/s (15.4MB/s)(124MiB/8427msec); 0 zone resets 00:12:36.001 slat (usec): min=38, max=3488, avg=144.48, stdev=255.30 00:12:36.001 clat (msec): min=20, max=242, avg=67.36, stdev=27.61 00:12:36.001 lat (msec): min=20, max=242, avg=67.51, stdev=27.61 00:12:36.001 clat percentiles (msec): 00:12:36.001 | 1.00th=[ 39], 5.00th=[ 41], 10.00th=[ 43], 20.00th=[ 48], 00:12:36.001 | 30.00th=[ 53], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 66], 00:12:36.001 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 96], 95.00th=[ 123], 00:12:36.001 | 99.00th=[ 171], 99.50th=[ 209], 99.90th=[ 243], 99.95th=[ 243], 00:12:36.001 | 99.99th=[ 243] 00:12:36.001 bw ( KiB/s): min= 2810, max=20264, per=1.03%, avg=12840.68, stdev=6098.86, samples=19 00:12:36.001 iops : min= 21, max= 158, avg=100.16, stdev=47.74, samples=19 00:12:36.001 lat (msec) : 4=1.18%, 10=30.58%, 20=12.91%, 50=16.19%, 100=34.63% 00:12:36.001 lat (msec) : 250=4.51% 00:12:36.001 cpu : usr=0.70%, sys=0.39%, ctx=3295, majf=0, minf=9 00:12:36.001 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.001 complete : 0=0.0%, 4=99.3%, 
8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.001 issued rwts: total=960,992,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.001 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:36.001 job70: (groupid=0, jobs=1): err= 0: pid=81852: Tue Jul 23 05:03:36 2024 00:12:36.001 read: IOPS=76, BW=9798KiB/s (10.0MB/s)(80.0MiB/8361msec) 00:12:36.001 slat (usec): min=6, max=1652, avg=76.10, stdev=159.26 00:12:36.001 clat (msec): min=3, max=401, avg=20.48, stdev=40.25 00:12:36.001 lat (msec): min=4, max=401, avg=20.55, stdev=40.26 00:12:36.001 clat percentiles (msec): 00:12:36.001 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 8], 00:12:36.001 | 30.00th=[ 9], 40.00th=[ 11], 50.00th=[ 13], 60.00th=[ 16], 00:12:36.001 | 70.00th=[ 19], 80.00th=[ 21], 90.00th=[ 30], 95.00th=[ 45], 00:12:36.001 | 99.00th=[ 317], 99.50th=[ 397], 99.90th=[ 401], 99.95th=[ 401], 00:12:36.001 | 99.99th=[ 401] 00:12:36.001 write: IOPS=79, BW=9.97MiB/s (10.5MB/s)(83.5MiB/8371msec); 0 zone resets 00:12:36.001 slat (usec): min=41, max=3305, avg=149.78, stdev=246.21 00:12:36.001 clat (msec): min=18, max=379, avg=99.24, stdev=50.70 00:12:36.001 lat (msec): min=18, max=379, avg=99.39, stdev=50.69 00:12:36.001 clat percentiles (msec): 00:12:36.001 | 1.00th=[ 24], 5.00th=[ 59], 10.00th=[ 60], 20.00th=[ 65], 00:12:36.001 | 30.00th=[ 69], 40.00th=[ 73], 50.00th=[ 79], 60.00th=[ 90], 00:12:36.001 | 70.00th=[ 110], 80.00th=[ 129], 90.00th=[ 161], 95.00th=[ 211], 00:12:36.001 | 99.00th=[ 296], 99.50th=[ 338], 99.90th=[ 380], 99.95th=[ 380], 00:12:36.001 | 99.99th=[ 380] 00:12:36.001 bw ( KiB/s): min= 1792, max=16384, per=0.72%, avg=8906.11, stdev=4034.72, samples=19 00:12:36.001 iops : min= 14, max= 128, avg=69.58, stdev=31.52, samples=19 00:12:36.001 lat (msec) : 4=0.08%, 10=18.58%, 20=20.18%, 50=8.64%, 100=33.87% 00:12:36.001 lat (msec) : 250=17.20%, 500=1.45% 00:12:36.001 cpu : usr=0.50%, sys=0.24%, ctx=2248, majf=0, minf=3 00:12:36.001 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.2%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:12:36.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.001 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.001 issued rwts: total=640,668,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.001 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:36.001 job71: (groupid=0, jobs=1): err= 0: pid=81853: Tue Jul 23 05:03:36 2024 00:12:36.001 read: IOPS=78, BW=9.85MiB/s (10.3MB/s)(80.0MiB/8124msec) 00:12:36.001 slat (usec): min=5, max=1239, avg=54.05, stdev=100.72 00:12:36.001 clat (usec): min=5903, max=55327, avg=12780.15, stdev=6429.92 00:12:36.001 lat (usec): min=5925, max=55346, avg=12834.20, stdev=6422.53 00:12:36.001 clat percentiles (usec): 00:12:36.001 | 1.00th=[ 6063], 5.00th=[ 6783], 10.00th=[ 7767], 20.00th=[ 8586], 00:12:36.001 | 30.00th=[ 9503], 40.00th=[10290], 50.00th=[11338], 60.00th=[12125], 00:12:36.001 | 70.00th=[13042], 80.00th=[15008], 90.00th=[18482], 95.00th=[26084], 00:12:36.001 | 99.00th=[41681], 99.50th=[43254], 99.90th=[55313], 99.95th=[55313], 00:12:36.001 | 99.99th=[55313] 00:12:36.001 write: IOPS=85, BW=10.7MiB/s (11.3MB/s)(96.9MiB/9022msec); 0 zone resets 00:12:36.001 slat (usec): min=38, max=2275, avg=153.72, stdev=218.62 00:12:36.001 clat (msec): min=33, max=396, avg=92.13, stdev=46.80 00:12:36.001 lat (msec): min=33, max=396, avg=92.28, stdev=46.83 00:12:36.001 clat percentiles (msec): 00:12:36.001 | 1.00th=[ 40], 5.00th=[ 58], 10.00th=[ 61], 20.00th=[ 65], 00:12:36.001 | 30.00th=[ 68], 40.00th=[ 71], 50.00th=[ 75], 60.00th=[ 81], 00:12:36.001 | 70.00th=[ 92], 80.00th=[ 109], 90.00th=[ 148], 95.00th=[ 190], 00:12:36.001 | 99.00th=[ 279], 99.50th=[ 359], 99.90th=[ 397], 99.95th=[ 397], 00:12:36.001 | 99.99th=[ 397] 00:12:36.001 bw ( KiB/s): min= 2048, max=14592, per=0.79%, avg=9827.75, stdev=3949.04, samples=20 00:12:36.001 iops : min= 16, max= 114, avg=76.65, stdev=30.90, samples=20 00:12:36.001 lat (msec) : 10=16.82%, 20=24.66%, 50=4.24%, 
100=40.71%, 250=12.65% 00:12:36.001 lat (msec) : 500=0.92% 00:12:36.001 cpu : usr=0.56%, sys=0.27%, ctx=2379, majf=0, minf=3 00:12:36.002 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.002 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.002 issued rwts: total=640,775,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.002 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:36.002 job72: (groupid=0, jobs=1): err= 0: pid=81854: Tue Jul 23 05:03:36 2024 00:12:36.002 read: IOPS=74, BW=9475KiB/s (9702kB/s)(80.0MiB/8646msec) 00:12:36.002 slat (usec): min=5, max=3957, avg=77.25, stdev=215.63 00:12:36.002 clat (msec): min=7, max=137, avg=20.39, stdev=16.43 00:12:36.002 lat (msec): min=7, max=137, avg=20.46, stdev=16.42 00:12:36.002 clat percentiles (msec): 00:12:36.002 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 14], 00:12:36.002 | 30.00th=[ 15], 40.00th=[ 16], 50.00th=[ 17], 60.00th=[ 18], 00:12:36.002 | 70.00th=[ 19], 80.00th=[ 22], 90.00th=[ 28], 95.00th=[ 39], 00:12:36.002 | 99.00th=[ 115], 99.50th=[ 128], 99.90th=[ 138], 99.95th=[ 138], 00:12:36.002 | 99.99th=[ 138] 00:12:36.002 write: IOPS=91, BW=11.5MiB/s (12.0MB/s)(96.2MiB/8405msec); 0 zone resets 00:12:36.002 slat (usec): min=37, max=4070, avg=145.38, stdev=252.82 00:12:36.002 clat (msec): min=40, max=394, avg=86.32, stdev=42.81 00:12:36.002 lat (msec): min=40, max=394, avg=86.46, stdev=42.81 00:12:36.002 clat percentiles (msec): 00:12:36.002 | 1.00th=[ 47], 5.00th=[ 58], 10.00th=[ 59], 20.00th=[ 63], 00:12:36.002 | 30.00th=[ 66], 40.00th=[ 69], 50.00th=[ 73], 60.00th=[ 78], 00:12:36.002 | 70.00th=[ 83], 80.00th=[ 97], 90.00th=[ 123], 95.00th=[ 182], 00:12:36.002 | 99.00th=[ 284], 99.50th=[ 300], 99.90th=[ 397], 99.95th=[ 397], 00:12:36.002 | 99.99th=[ 397] 00:12:36.002 bw ( KiB/s): min= 768, max=16128, per=0.79%, avg=9765.85, stdev=5003.30, 
samples=20 00:12:36.002 iops : min= 6, max= 126, avg=76.15, stdev=39.17, samples=20 00:12:36.002 lat (msec) : 10=3.05%, 20=31.49%, 50=9.43%, 100=45.53%, 250=9.36% 00:12:36.002 lat (msec) : 500=1.13% 00:12:36.002 cpu : usr=0.48%, sys=0.33%, ctx=2364, majf=0, minf=3 00:12:36.002 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.002 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.002 issued rwts: total=640,770,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.002 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:36.002 job73: (groupid=0, jobs=1): err= 0: pid=81856: Tue Jul 23 05:03:36 2024 00:12:36.002 read: IOPS=77, BW=9942KiB/s (10.2MB/s)(80.0MiB/8240msec) 00:12:36.002 slat (usec): min=7, max=4617, avg=72.74, stdev=214.99 00:12:36.002 clat (usec): min=4259, max=73130, avg=16246.02, stdev=9748.71 00:12:36.002 lat (usec): min=4416, max=73145, avg=16318.76, stdev=9750.90 00:12:36.002 clat percentiles (usec): 00:12:36.002 | 1.00th=[ 4948], 5.00th=[ 6259], 10.00th=[ 8094], 20.00th=[ 9634], 00:12:36.002 | 30.00th=[11469], 40.00th=[13173], 50.00th=[14353], 60.00th=[15533], 00:12:36.002 | 70.00th=[17171], 80.00th=[19530], 90.00th=[24511], 95.00th=[38536], 00:12:36.002 | 99.00th=[57934], 99.50th=[57934], 99.90th=[72877], 99.95th=[72877], 00:12:36.002 | 99.99th=[72877] 00:12:36.002 write: IOPS=82, BW=10.4MiB/s (10.9MB/s)(90.5MiB/8733msec); 0 zone resets 00:12:36.002 slat (usec): min=38, max=3079, avg=153.59, stdev=254.72 00:12:36.002 clat (msec): min=53, max=378, avg=95.56, stdev=46.67 00:12:36.002 lat (msec): min=53, max=378, avg=95.71, stdev=46.68 00:12:36.002 clat percentiles (msec): 00:12:36.002 | 1.00th=[ 58], 5.00th=[ 59], 10.00th=[ 61], 20.00th=[ 64], 00:12:36.002 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 78], 60.00th=[ 84], 00:12:36.002 | 70.00th=[ 100], 80.00th=[ 125], 90.00th=[ 157], 95.00th=[ 180], 
00:12:36.002 | 99.00th=[ 284], 99.50th=[ 317], 99.90th=[ 380], 99.95th=[ 380], 00:12:36.002 | 99.99th=[ 380] 00:12:36.002 bw ( KiB/s): min= 2304, max=15104, per=0.74%, avg=9176.25, stdev=4244.95, samples=20 00:12:36.002 iops : min= 18, max= 118, avg=71.55, stdev=33.24, samples=20 00:12:36.002 lat (msec) : 10=10.78%, 20=27.86%, 50=7.33%, 100=38.20%, 250=14.88% 00:12:36.002 lat (msec) : 500=0.95% 00:12:36.002 cpu : usr=0.46%, sys=0.31%, ctx=2332, majf=0, minf=7 00:12:36.002 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.002 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.002 issued rwts: total=640,724,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.002 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:36.002 job74: (groupid=0, jobs=1): err= 0: pid=81862: Tue Jul 23 05:03:36 2024 00:12:36.002 read: IOPS=75, BW=9706KiB/s (9939kB/s)(80.0MiB/8440msec) 00:12:36.002 slat (usec): min=5, max=1408, avg=76.00, stdev=157.74 00:12:36.002 clat (usec): min=8309, max=81581, avg=18626.05, stdev=8865.24 00:12:36.002 lat (usec): min=9284, max=81593, avg=18702.04, stdev=8861.24 00:12:36.002 clat percentiles (usec): 00:12:36.002 | 1.00th=[ 9372], 5.00th=[10028], 10.00th=[10552], 20.00th=[12256], 00:12:36.002 | 30.00th=[13566], 40.00th=[14877], 50.00th=[16909], 60.00th=[18220], 00:12:36.002 | 70.00th=[19792], 80.00th=[21890], 90.00th=[30540], 95.00th=[34866], 00:12:36.002 | 99.00th=[50070], 99.50th=[66323], 99.90th=[81265], 99.95th=[81265], 00:12:36.002 | 99.99th=[81265] 00:12:36.002 write: IOPS=92, BW=11.6MiB/s (12.2MB/s)(99.1MiB/8552msec); 0 zone resets 00:12:36.002 slat (usec): min=32, max=3812, avg=158.73, stdev=262.11 00:12:36.002 clat (msec): min=25, max=310, avg=85.28, stdev=39.54 00:12:36.002 lat (msec): min=25, max=310, avg=85.44, stdev=39.55 00:12:36.002 clat percentiles (msec): 00:12:36.002 | 1.00th=[ 33], 5.00th=[ 
59], 10.00th=[ 61], 20.00th=[ 64], 00:12:36.002 | 30.00th=[ 67], 40.00th=[ 70], 50.00th=[ 73], 60.00th=[ 79], 00:12:36.002 | 70.00th=[ 85], 80.00th=[ 93], 90.00th=[ 121], 95.00th=[ 163], 00:12:36.002 | 99.00th=[ 264], 99.50th=[ 300], 99.90th=[ 313], 99.95th=[ 313], 00:12:36.002 | 99.99th=[ 313] 00:12:36.002 bw ( KiB/s): min= 1792, max=15360, per=0.81%, avg=10045.00, stdev=4840.09, samples=20 00:12:36.002 iops : min= 14, max= 120, avg=78.35, stdev=37.72, samples=20 00:12:36.002 lat (msec) : 10=2.16%, 20=29.80%, 50=12.70%, 100=46.13%, 250=8.51% 00:12:36.002 lat (msec) : 500=0.70% 00:12:36.002 cpu : usr=0.49%, sys=0.33%, ctx=2441, majf=0, minf=1 00:12:36.002 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.002 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.002 issued rwts: total=640,793,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.002 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:36.002 job75: (groupid=0, jobs=1): err= 0: pid=81864: Tue Jul 23 05:03:36 2024 00:12:36.002 read: IOPS=71, BW=9201KiB/s (9422kB/s)(80.0MiB/8903msec) 00:12:36.002 slat (usec): min=6, max=1619, avg=71.02, stdev=142.82 00:12:36.002 clat (msec): min=3, max=166, avg=21.51, stdev=19.06 00:12:36.002 lat (msec): min=3, max=166, avg=21.58, stdev=19.08 00:12:36.002 clat percentiles (msec): 00:12:36.002 | 1.00th=[ 8], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:12:36.002 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 16], 60.00th=[ 18], 00:12:36.002 | 70.00th=[ 22], 80.00th=[ 27], 90.00th=[ 37], 95.00th=[ 51], 00:12:36.002 | 99.00th=[ 122], 99.50th=[ 146], 99.90th=[ 167], 99.95th=[ 167], 00:12:36.002 | 99.99th=[ 167] 00:12:36.002 write: IOPS=96, BW=12.1MiB/s (12.7MB/s)(100MiB/8286msec); 0 zone resets 00:12:36.002 slat (usec): min=39, max=1545, avg=137.57, stdev=177.97 00:12:36.003 clat (usec): min=1422, max=370157, avg=82248.11, 
stdev=41485.67 00:12:36.003 lat (usec): min=1502, max=370288, avg=82385.68, stdev=41484.75 00:12:36.003 clat percentiles (msec): 00:12:36.003 | 1.00th=[ 7], 5.00th=[ 25], 10.00th=[ 59], 20.00th=[ 64], 00:12:36.003 | 30.00th=[ 66], 40.00th=[ 68], 50.00th=[ 71], 60.00th=[ 77], 00:12:36.003 | 70.00th=[ 84], 80.00th=[ 97], 90.00th=[ 124], 95.00th=[ 165], 00:12:36.003 | 99.00th=[ 249], 99.50th=[ 284], 99.90th=[ 372], 99.95th=[ 372], 00:12:36.003 | 99.99th=[ 372] 00:12:36.003 bw ( KiB/s): min= 2048, max=24064, per=0.86%, avg=10679.89, stdev=5379.12, samples=19 00:12:36.003 iops : min= 16, max= 188, avg=83.16, stdev=42.23, samples=19 00:12:36.003 lat (msec) : 2=0.14%, 4=0.35%, 10=5.97%, 20=25.83%, 50=12.92% 00:12:36.003 lat (msec) : 100=43.19%, 250=11.04%, 500=0.56% 00:12:36.003 cpu : usr=0.56%, sys=0.28%, ctx=2451, majf=0, minf=1 00:12:36.003 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.003 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.003 issued rwts: total=640,800,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.003 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:36.003 job76: (groupid=0, jobs=1): err= 0: pid=81865: Tue Jul 23 05:03:36 2024 00:12:36.003 read: IOPS=78, BW=9.80MiB/s (10.3MB/s)(80.0MiB/8162msec) 00:12:36.003 slat (usec): min=6, max=974, avg=58.62, stdev=100.56 00:12:36.003 clat (usec): min=4317, max=58123, avg=13145.14, stdev=7414.16 00:12:36.003 lat (usec): min=4351, max=58135, avg=13203.75, stdev=7416.51 00:12:36.003 clat percentiles (usec): 00:12:36.003 | 1.00th=[ 4686], 5.00th=[ 5211], 10.00th=[ 5997], 20.00th=[ 7373], 00:12:36.003 | 30.00th=[ 8717], 40.00th=[ 9896], 50.00th=[11600], 60.00th=[13304], 00:12:36.003 | 70.00th=[15139], 80.00th=[16909], 90.00th=[21365], 95.00th=[26084], 00:12:36.003 | 99.00th=[43254], 99.50th=[50594], 99.90th=[57934], 99.95th=[57934], 00:12:36.003 | 
99.99th=[57934] 00:12:36.003 write: IOPS=83, BW=10.5MiB/s (11.0MB/s)(94.4MiB/8991msec); 0 zone resets 00:12:36.003 slat (usec): min=36, max=3164, avg=180.65, stdev=327.77 00:12:36.003 clat (msec): min=47, max=371, avg=94.29, stdev=43.16 00:12:36.003 lat (msec): min=47, max=371, avg=94.47, stdev=43.16 00:12:36.003 clat percentiles (msec): 00:12:36.003 | 1.00th=[ 55], 5.00th=[ 59], 10.00th=[ 62], 20.00th=[ 65], 00:12:36.003 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 78], 60.00th=[ 85], 00:12:36.003 | 70.00th=[ 96], 80.00th=[ 121], 90.00th=[ 153], 95.00th=[ 190], 00:12:36.003 | 99.00th=[ 245], 99.50th=[ 292], 99.90th=[ 372], 99.95th=[ 372], 00:12:36.003 | 99.99th=[ 372] 00:12:36.003 bw ( KiB/s): min= 2048, max=15104, per=0.77%, avg=9559.85, stdev=3949.63, samples=20 00:12:36.003 iops : min= 16, max= 118, avg=74.55, stdev=30.91, samples=20 00:12:36.003 lat (msec) : 10=18.35%, 20=22.51%, 50=4.87%, 100=39.50%, 250=14.34% 00:12:36.003 lat (msec) : 500=0.43% 00:12:36.003 cpu : usr=0.47%, sys=0.32%, ctx=2420, majf=0, minf=1 00:12:36.003 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.003 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.003 issued rwts: total=640,755,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.003 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:36.003 job77: (groupid=0, jobs=1): err= 0: pid=81866: Tue Jul 23 05:03:36 2024 00:12:36.003 read: IOPS=79, BW=9.91MiB/s (10.4MB/s)(80.0MiB/8076msec) 00:12:36.003 slat (usec): min=6, max=1516, avg=70.21, stdev=140.49 00:12:36.003 clat (msec): min=3, max=161, avg=12.90, stdev=15.82 00:12:36.003 lat (msec): min=3, max=161, avg=12.97, stdev=15.82 00:12:36.003 clat percentiles (msec): 00:12:36.003 | 1.00th=[ 5], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 6], 00:12:36.003 | 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 10], 00:12:36.003 | 70.00th=[ 10], 
80.00th=[ 14], 90.00th=[ 21], 95.00th=[ 40], 00:12:36.003 | 99.00th=[ 82], 99.50th=[ 93], 99.90th=[ 161], 99.95th=[ 161], 00:12:36.003 | 99.99th=[ 161] 00:12:36.003 write: IOPS=74, BW=9505KiB/s (9733kB/s)(83.5MiB/8996msec); 0 zone resets 00:12:36.003 slat (usec): min=38, max=1974, avg=154.03, stdev=225.35 00:12:36.003 clat (msec): min=56, max=267, avg=107.05, stdev=45.59 00:12:36.003 lat (msec): min=56, max=267, avg=107.20, stdev=45.60 00:12:36.003 clat percentiles (msec): 00:12:36.003 | 1.00th=[ 57], 5.00th=[ 60], 10.00th=[ 62], 20.00th=[ 67], 00:12:36.003 | 30.00th=[ 71], 40.00th=[ 83], 50.00th=[ 97], 60.00th=[ 109], 00:12:36.003 | 70.00th=[ 128], 80.00th=[ 140], 90.00th=[ 171], 95.00th=[ 209], 00:12:36.003 | 99.00th=[ 243], 99.50th=[ 259], 99.90th=[ 268], 99.95th=[ 268], 00:12:36.003 | 99.99th=[ 268] 00:12:36.003 bw ( KiB/s): min= 1792, max=16128, per=0.69%, avg=8527.21, stdev=3771.29, samples=19 00:12:36.003 iops : min= 14, max= 126, avg=66.47, stdev=29.47, samples=19 00:12:36.003 lat (msec) : 4=0.23%, 10=34.02%, 20=9.71%, 50=2.98%, 100=28.13% 00:12:36.003 lat (msec) : 250=24.54%, 500=0.38% 00:12:36.003 cpu : usr=0.43%, sys=0.28%, ctx=2295, majf=0, minf=5 00:12:36.003 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.003 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.003 issued rwts: total=640,668,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.003 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:36.003 job78: (groupid=0, jobs=1): err= 0: pid=81867: Tue Jul 23 05:03:36 2024 00:12:36.003 read: IOPS=73, BW=9460KiB/s (9687kB/s)(80.0MiB/8660msec) 00:12:36.003 slat (usec): min=7, max=1461, avg=68.05, stdev=145.23 00:12:36.003 clat (usec): min=8413, max=83720, avg=18743.88, stdev=10708.78 00:12:36.003 lat (usec): min=8524, max=83733, avg=18811.94, stdev=10700.12 00:12:36.003 clat percentiles (usec): 
00:12:36.003 | 1.00th=[ 9110], 5.00th=[ 9503], 10.00th=[10159], 20.00th=[11207], 00:12:36.003 | 30.00th=[12256], 40.00th=[13960], 50.00th=[16188], 60.00th=[17695], 00:12:36.003 | 70.00th=[19268], 80.00th=[22938], 90.00th=[30016], 95.00th=[37487], 00:12:36.003 | 99.00th=[66847], 99.50th=[76022], 99.90th=[83362], 99.95th=[83362], 00:12:36.003 | 99.99th=[83362] 00:12:36.003 write: IOPS=91, BW=11.4MiB/s (11.9MB/s)(97.4MiB/8559msec); 0 zone resets 00:12:36.003 slat (usec): min=38, max=3000, avg=148.92, stdev=231.68 00:12:36.003 clat (msec): min=7, max=281, avg=86.99, stdev=39.65 00:12:36.003 lat (msec): min=7, max=281, avg=87.14, stdev=39.67 00:12:36.003 clat percentiles (msec): 00:12:36.003 | 1.00th=[ 9], 5.00th=[ 58], 10.00th=[ 62], 20.00th=[ 66], 00:12:36.003 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 82], 00:12:36.003 | 70.00th=[ 90], 80.00th=[ 106], 90.00th=[ 128], 95.00th=[ 165], 00:12:36.003 | 99.00th=[ 247], 99.50th=[ 257], 99.90th=[ 284], 99.95th=[ 284], 00:12:36.003 | 99.99th=[ 284] 00:12:36.003 bw ( KiB/s): min= 2560, max=21803, per=0.84%, avg=10390.11, stdev=4639.30, samples=19 00:12:36.003 iops : min= 20, max= 170, avg=81.11, stdev=36.18, samples=19 00:12:36.003 lat (msec) : 10=4.65%, 20=30.51%, 50=10.78%, 100=41.44%, 250=12.12% 00:12:36.003 lat (msec) : 500=0.49% 00:12:36.003 cpu : usr=0.50%, sys=0.31%, ctx=2371, majf=0, minf=3 00:12:36.003 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.003 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.003 issued rwts: total=640,779,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.003 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:36.003 job79: (groupid=0, jobs=1): err= 0: pid=81868: Tue Jul 23 05:03:36 2024 00:12:36.003 read: IOPS=69, BW=8869KiB/s (9081kB/s)(67.2MiB/7765msec) 00:12:36.003 slat (usec): min=7, max=1102, avg=56.26, stdev=92.18 
00:12:36.003 clat (msec): min=4, max=160, avg=16.43, stdev=23.95 00:12:36.003 lat (msec): min=4, max=160, avg=16.49, stdev=23.94 00:12:36.003 clat percentiles (msec): 00:12:36.003 | 1.00th=[ 5], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 7], 00:12:36.003 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 11], 60.00th=[ 13], 00:12:36.003 | 70.00th=[ 14], 80.00th=[ 17], 90.00th=[ 23], 95.00th=[ 40], 00:12:36.003 | 99.00th=[ 153], 99.50th=[ 159], 99.90th=[ 161], 99.95th=[ 161], 00:12:36.003 | 99.99th=[ 161] 00:12:36.003 write: IOPS=71, BW=9210KiB/s (9431kB/s)(80.0MiB/8895msec); 0 zone resets 00:12:36.003 slat (usec): min=38, max=2017, avg=131.54, stdev=191.83 00:12:36.003 clat (msec): min=55, max=369, avg=110.63, stdev=55.10 00:12:36.003 lat (msec): min=55, max=369, avg=110.76, stdev=55.10 00:12:36.003 clat percentiles (msec): 00:12:36.003 | 1.00th=[ 57], 5.00th=[ 59], 10.00th=[ 62], 20.00th=[ 67], 00:12:36.003 | 30.00th=[ 71], 40.00th=[ 81], 50.00th=[ 89], 60.00th=[ 109], 00:12:36.003 | 70.00th=[ 128], 80.00th=[ 146], 90.00th=[ 192], 95.00th=[ 222], 00:12:36.003 | 99.00th=[ 292], 99.50th=[ 334], 99.90th=[ 372], 99.95th=[ 372], 00:12:36.003 | 99.99th=[ 372] 00:12:36.003 bw ( KiB/s): min= 1792, max=15872, per=0.66%, avg=8206.84, stdev=3599.62, samples=19 00:12:36.003 iops : min= 14, max= 124, avg=63.95, stdev=28.01, samples=19 00:12:36.003 lat (msec) : 10=22.16%, 20=17.40%, 50=3.99%, 100=30.65%, 250=24.28% 00:12:36.003 lat (msec) : 500=1.53% 00:12:36.003 cpu : usr=0.44%, sys=0.22%, ctx=2002, majf=0, minf=7 00:12:36.003 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.003 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.003 issued rwts: total=538,640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.003 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:36.003 job80: (groupid=0, jobs=1): err= 0: pid=81869: Tue Jul 23 05:03:36 2024 
00:12:36.003 read: IOPS=93, BW=11.7MiB/s (12.3MB/s)(100MiB/8532msec) 00:12:36.003 slat (usec): min=5, max=1222, avg=55.45, stdev=112.05 00:12:36.003 clat (usec): min=4316, max=78801, avg=15548.99, stdev=8252.48 00:12:36.003 lat (usec): min=4326, max=78810, avg=15604.44, stdev=8252.13 00:12:36.003 clat percentiles (usec): 00:12:36.003 | 1.00th=[ 4621], 5.00th=[ 5800], 10.00th=[ 8356], 20.00th=[ 9372], 00:12:36.003 | 30.00th=[11207], 40.00th=[12780], 50.00th=[14091], 60.00th=[15270], 00:12:36.003 | 70.00th=[17171], 80.00th=[19530], 90.00th=[25297], 95.00th=[27919], 00:12:36.003 | 99.00th=[46400], 99.50th=[58459], 99.90th=[79168], 99.95th=[79168], 00:12:36.003 | 99.99th=[79168] 00:12:36.003 write: IOPS=95, BW=11.9MiB/s (12.5MB/s)(101MiB/8466msec); 0 zone resets 00:12:36.004 slat (usec): min=31, max=3403, avg=139.12, stdev=221.92 00:12:36.004 clat (msec): min=30, max=267, avg=82.95, stdev=36.83 00:12:36.004 lat (msec): min=31, max=267, avg=83.09, stdev=36.83 00:12:36.004 clat percentiles (msec): 00:12:36.004 | 1.00th=[ 42], 5.00th=[ 55], 10.00th=[ 57], 20.00th=[ 59], 00:12:36.004 | 30.00th=[ 63], 40.00th=[ 66], 50.00th=[ 69], 60.00th=[ 75], 00:12:36.004 | 70.00th=[ 85], 80.00th=[ 102], 90.00th=[ 125], 95.00th=[ 178], 00:12:36.004 | 99.00th=[ 222], 99.50th=[ 232], 99.90th=[ 268], 99.95th=[ 268], 00:12:36.004 | 99.99th=[ 268] 00:12:36.004 bw ( KiB/s): min= 1792, max=16128, per=0.85%, avg=10543.84, stdev=5183.50, samples=19 00:12:36.004 iops : min= 14, max= 126, avg=82.16, stdev=40.41, samples=19 00:12:36.004 lat (msec) : 10=12.31%, 20=27.92%, 50=10.07%, 100=39.37%, 250=10.20% 00:12:36.004 lat (msec) : 500=0.12% 00:12:36.004 cpu : usr=0.48%, sys=0.41%, ctx=2589, majf=0, minf=5 00:12:36.004 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.004 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.004 issued rwts: total=800,808,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:12:36.004 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:36.004 job81: (groupid=0, jobs=1): err= 0: pid=81870: Tue Jul 23 05:03:36 2024 00:12:36.004 read: IOPS=85, BW=10.6MiB/s (11.1MB/s)(80.0MiB/7526msec) 00:12:36.004 slat (usec): min=6, max=1306, avg=65.31, stdev=128.51 00:12:36.004 clat (msec): min=4, max=211, avg=19.30, stdev=27.62 00:12:36.004 lat (msec): min=4, max=211, avg=19.36, stdev=27.62 00:12:36.004 clat percentiles (msec): 00:12:36.004 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 9], 00:12:36.004 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 13], 60.00th=[ 15], 00:12:36.004 | 70.00th=[ 17], 80.00th=[ 20], 90.00th=[ 27], 95.00th=[ 78], 00:12:36.004 | 99.00th=[ 161], 99.50th=[ 184], 99.90th=[ 211], 99.95th=[ 211], 00:12:36.004 | 99.99th=[ 211] 00:12:36.004 write: IOPS=76, BW=9821KiB/s (10.1MB/s)(81.2MiB/8472msec); 0 zone resets 00:12:36.004 slat (usec): min=38, max=3147, avg=164.46, stdev=292.88 00:12:36.004 clat (msec): min=36, max=364, avg=103.57, stdev=40.19 00:12:36.004 lat (msec): min=36, max=364, avg=103.73, stdev=40.18 00:12:36.004 clat percentiles (msec): 00:12:36.004 | 1.00th=[ 55], 5.00th=[ 60], 10.00th=[ 63], 20.00th=[ 69], 00:12:36.004 | 30.00th=[ 77], 40.00th=[ 86], 50.00th=[ 100], 60.00th=[ 109], 00:12:36.004 | 70.00th=[ 118], 80.00th=[ 130], 90.00th=[ 155], 95.00th=[ 165], 00:12:36.004 | 99.00th=[ 245], 99.50th=[ 330], 99.90th=[ 363], 99.95th=[ 363], 00:12:36.004 | 99.99th=[ 363] 00:12:36.004 bw ( KiB/s): min= 1792, max=15104, per=0.69%, avg=8527.53, stdev=3659.17, samples=19 00:12:36.004 iops : min= 14, max= 118, avg=66.47, stdev=28.59, samples=19 00:12:36.004 lat (msec) : 10=20.16%, 20=20.54%, 50=6.36%, 100=26.43%, 250=26.05% 00:12:36.004 lat (msec) : 500=0.47% 00:12:36.004 cpu : usr=0.41%, sys=0.32%, ctx=2253, majf=0, minf=5 00:12:36.004 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:12:36.004 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.004 issued rwts: total=640,650,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.004 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:36.004 job82: (groupid=0, jobs=1): err= 0: pid=81871: Tue Jul 23 05:03:36 2024 00:12:36.004 read: IOPS=74, BW=9491KiB/s (9719kB/s)(80.0MiB/8631msec) 00:12:36.004 slat (usec): min=7, max=2077, avg=67.11, stdev=155.19 00:12:36.004 clat (usec): min=7622, max=46413, avg=15689.37, stdev=5885.89 00:12:36.004 lat (usec): min=7701, max=46472, avg=15756.47, stdev=5872.95 00:12:36.004 clat percentiles (usec): 00:12:36.004 | 1.00th=[ 8029], 5.00th=[ 8848], 10.00th=[10028], 20.00th=[11207], 00:12:36.004 | 30.00th=[11863], 40.00th=[13304], 50.00th=[14353], 60.00th=[15270], 00:12:36.004 | 70.00th=[16909], 80.00th=[19268], 90.00th=[24249], 95.00th=[27657], 00:12:36.004 | 99.00th=[36963], 99.50th=[38536], 99.90th=[46400], 99.95th=[46400], 00:12:36.004 | 99.99th=[46400] 00:12:36.004 write: IOPS=88, BW=11.1MiB/s (11.6MB/s)(97.1MiB/8769msec); 0 zone resets 00:12:36.004 slat (usec): min=35, max=2719, avg=153.92, stdev=226.81 00:12:36.004 clat (msec): min=31, max=432, avg=89.45, stdev=46.45 00:12:36.004 lat (msec): min=31, max=432, avg=89.60, stdev=46.45 00:12:36.004 clat percentiles (msec): 00:12:36.004 | 1.00th=[ 45], 5.00th=[ 56], 10.00th=[ 57], 20.00th=[ 60], 00:12:36.004 | 30.00th=[ 65], 40.00th=[ 69], 50.00th=[ 75], 60.00th=[ 83], 00:12:36.004 | 70.00th=[ 92], 80.00th=[ 107], 90.00th=[ 140], 95.00th=[ 174], 00:12:36.004 | 99.00th=[ 284], 99.50th=[ 355], 99.90th=[ 435], 99.95th=[ 435], 00:12:36.004 | 99.99th=[ 435] 00:12:36.004 bw ( KiB/s): min= 256, max=16673, per=0.79%, avg=9856.00, stdev=5364.09, samples=20 00:12:36.004 iops : min= 2, max= 130, avg=76.85, stdev=42.01, samples=20 00:12:36.004 lat (msec) : 10=4.45%, 20=32.25%, 50=9.17%, 100=40.86%, 250=12.28% 00:12:36.004 lat (msec) : 500=0.99% 00:12:36.004 cpu : usr=0.55%, sys=0.24%, 
ctx=2456, majf=0, minf=1 00:12:36.004 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.004 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.004 issued rwts: total=640,777,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.004 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:36.004 job83: (groupid=0, jobs=1): err= 0: pid=81872: Tue Jul 23 05:03:36 2024 00:12:36.004 read: IOPS=77, BW=9897KiB/s (10.1MB/s)(80.0MiB/8277msec) 00:12:36.004 slat (usec): min=5, max=1391, avg=63.41, stdev=120.31 00:12:36.004 clat (msec): min=3, max=123, avg=15.20, stdev=17.53 00:12:36.004 lat (msec): min=4, max=123, avg=15.26, stdev=17.52 00:12:36.004 clat percentiles (msec): 00:12:36.004 | 1.00th=[ 5], 5.00th=[ 5], 10.00th=[ 5], 20.00th=[ 7], 00:12:36.004 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 13], 00:12:36.004 | 70.00th=[ 15], 80.00th=[ 18], 90.00th=[ 24], 95.00th=[ 44], 00:12:36.004 | 99.00th=[ 118], 99.50th=[ 122], 99.90th=[ 124], 99.95th=[ 124], 00:12:36.004 | 99.99th=[ 124] 00:12:36.004 write: IOPS=79, BW=9.91MiB/s (10.4MB/s)(87.2MiB/8808msec); 0 zone resets 00:12:36.004 slat (usec): min=37, max=3568, avg=174.76, stdev=311.32 00:12:36.004 clat (msec): min=32, max=326, avg=100.11, stdev=41.40 00:12:36.004 lat (msec): min=32, max=327, avg=100.29, stdev=41.43 00:12:36.004 clat percentiles (msec): 00:12:36.004 | 1.00th=[ 47], 5.00th=[ 56], 10.00th=[ 59], 20.00th=[ 67], 00:12:36.004 | 30.00th=[ 74], 40.00th=[ 83], 50.00th=[ 92], 60.00th=[ 103], 00:12:36.004 | 70.00th=[ 113], 80.00th=[ 126], 90.00th=[ 153], 95.00th=[ 171], 00:12:36.004 | 99.00th=[ 264], 99.50th=[ 271], 99.90th=[ 326], 99.95th=[ 326], 00:12:36.004 | 99.99th=[ 326] 00:12:36.004 bw ( KiB/s): min= 2048, max=15584, per=0.71%, avg=8814.21, stdev=3971.74, samples=19 00:12:36.004 iops : min= 16, max= 121, avg=68.68, stdev=30.98, samples=19 00:12:36.004 lat 
(msec) : 4=0.22%, 10=22.57%, 20=18.31%, 50=5.68%, 100=30.27% 00:12:36.004 lat (msec) : 250=22.20%, 500=0.75% 00:12:36.004 cpu : usr=0.50%, sys=0.25%, ctx=2366, majf=0, minf=9 00:12:36.004 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.004 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.004 issued rwts: total=640,698,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.004 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:36.004 job84: (groupid=0, jobs=1): err= 0: pid=81873: Tue Jul 23 05:03:36 2024 00:12:36.004 read: IOPS=80, BW=10.0MiB/s (10.5MB/s)(80.0MiB/7964msec) 00:12:36.004 slat (usec): min=5, max=891, avg=61.56, stdev=113.19 00:12:36.004 clat (msec): min=4, max=353, avg=21.22, stdev=41.56 00:12:36.004 lat (msec): min=4, max=353, avg=21.28, stdev=41.57 00:12:36.004 clat percentiles (msec): 00:12:36.004 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 8], 00:12:36.004 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 13], 00:12:36.004 | 70.00th=[ 14], 80.00th=[ 18], 90.00th=[ 31], 95.00th=[ 77], 00:12:36.004 | 99.00th=[ 271], 99.50th=[ 321], 99.90th=[ 355], 99.95th=[ 355], 00:12:36.004 | 99.99th=[ 355] 00:12:36.004 write: IOPS=80, BW=10.1MiB/s (10.6MB/s)(83.8MiB/8321msec); 0 zone resets 00:12:36.004 slat (usec): min=37, max=2097, avg=145.08, stdev=226.09 00:12:36.004 clat (msec): min=35, max=271, avg=98.68, stdev=35.76 00:12:36.004 lat (msec): min=35, max=272, avg=98.83, stdev=35.76 00:12:36.004 clat percentiles (msec): 00:12:36.004 | 1.00th=[ 54], 5.00th=[ 58], 10.00th=[ 61], 20.00th=[ 68], 00:12:36.004 | 30.00th=[ 73], 40.00th=[ 78], 50.00th=[ 89], 60.00th=[ 105], 00:12:36.004 | 70.00th=[ 116], 80.00th=[ 129], 90.00th=[ 148], 95.00th=[ 161], 00:12:36.004 | 99.00th=[ 203], 99.50th=[ 218], 99.90th=[ 271], 99.95th=[ 271], 00:12:36.004 | 99.99th=[ 271] 00:12:36.004 bw ( KiB/s): min= 1792, 
max=14592, per=0.69%, avg=8620.95, stdev=3752.12, samples=19 00:12:36.004 iops : min= 14, max= 114, avg=67.21, stdev=29.28, samples=19 00:12:36.004 lat (msec) : 10=22.75%, 20=17.94%, 50=5.04%, 100=30.23%, 250=23.28% 00:12:36.004 lat (msec) : 500=0.76% 00:12:36.004 cpu : usr=0.46%, sys=0.27%, ctx=2173, majf=0, minf=7 00:12:36.004 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.004 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.004 issued rwts: total=640,670,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.005 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:36.005 job85: (groupid=0, jobs=1): err= 0: pid=81874: Tue Jul 23 05:03:36 2024 00:12:36.005 read: IOPS=72, BW=9228KiB/s (9450kB/s)(80.0MiB/8877msec) 00:12:36.005 slat (usec): min=7, max=2236, avg=69.90, stdev=158.10 00:12:36.005 clat (usec): min=4603, max=82241, avg=14944.60, stdev=8428.66 00:12:36.005 lat (usec): min=4742, max=82257, avg=15014.50, stdev=8419.43 00:12:36.005 clat percentiles (usec): 00:12:36.005 | 1.00th=[ 7635], 5.00th=[ 8356], 10.00th=[ 8979], 20.00th=[10028], 00:12:36.005 | 30.00th=[10683], 40.00th=[11600], 50.00th=[12780], 60.00th=[13829], 00:12:36.005 | 70.00th=[15401], 80.00th=[18220], 90.00th=[22676], 95.00th=[27395], 00:12:36.005 | 99.00th=[62653], 99.50th=[74974], 99.90th=[82314], 99.95th=[82314], 00:12:36.005 | 99.99th=[82314] 00:12:36.005 write: IOPS=90, BW=11.3MiB/s (11.9MB/s)(100MiB/8846msec); 0 zone resets 00:12:36.005 slat (usec): min=37, max=10090, avg=177.97, stdev=515.02 00:12:36.005 clat (msec): min=8, max=341, avg=87.08, stdev=41.57 00:12:36.005 lat (msec): min=8, max=341, avg=87.26, stdev=41.55 00:12:36.005 clat percentiles (msec): 00:12:36.005 | 1.00th=[ 21], 5.00th=[ 55], 10.00th=[ 56], 20.00th=[ 59], 00:12:36.005 | 30.00th=[ 64], 40.00th=[ 67], 50.00th=[ 73], 60.00th=[ 82], 00:12:36.005 | 70.00th=[ 92], 80.00th=[ 
110], 90.00th=[ 144], 95.00th=[ 167], 00:12:36.005 | 99.00th=[ 226], 99.50th=[ 268], 99.90th=[ 342], 99.95th=[ 342], 00:12:36.005 | 99.99th=[ 342] 00:12:36.005 bw ( KiB/s): min= 1792, max=19712, per=0.82%, avg=10148.05, stdev=5172.30, samples=20 00:12:36.005 iops : min= 14, max= 154, avg=79.05, stdev=40.60, samples=20 00:12:36.005 lat (msec) : 10=9.51%, 20=28.12%, 50=7.92%, 100=40.35%, 250=13.68% 00:12:36.005 lat (msec) : 500=0.42% 00:12:36.005 cpu : usr=0.49%, sys=0.34%, ctx=2410, majf=0, minf=5 00:12:36.005 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.005 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.005 issued rwts: total=640,800,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.005 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:36.005 job86: (groupid=0, jobs=1): err= 0: pid=81875: Tue Jul 23 05:03:36 2024 00:12:36.005 read: IOPS=73, BW=9377KiB/s (9602kB/s)(80.0MiB/8736msec) 00:12:36.005 slat (usec): min=6, max=1456, avg=65.70, stdev=126.62 00:12:36.005 clat (msec): min=6, max=127, avg=19.78, stdev=16.62 00:12:36.005 lat (msec): min=6, max=127, avg=19.84, stdev=16.62 00:12:36.005 clat percentiles (msec): 00:12:36.005 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 11], 20.00th=[ 12], 00:12:36.005 | 30.00th=[ 12], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 17], 00:12:36.005 | 70.00th=[ 21], 80.00th=[ 25], 90.00th=[ 29], 95.00th=[ 46], 00:12:36.005 | 99.00th=[ 107], 99.50th=[ 117], 99.90th=[ 128], 99.95th=[ 128], 00:12:36.005 | 99.99th=[ 128] 00:12:36.005 write: IOPS=91, BW=11.5MiB/s (12.0MB/s)(96.9MiB/8453msec); 0 zone resets 00:12:36.005 slat (usec): min=38, max=2939, avg=147.31, stdev=240.12 00:12:36.005 clat (msec): min=23, max=366, avg=86.39, stdev=44.85 00:12:36.005 lat (msec): min=23, max=366, avg=86.54, stdev=44.84 00:12:36.005 clat percentiles (msec): 00:12:36.005 | 1.00th=[ 26], 5.00th=[ 55], 10.00th=[ 
57], 20.00th=[ 59], 00:12:36.005 | 30.00th=[ 63], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 77], 00:12:36.005 | 70.00th=[ 89], 80.00th=[ 104], 90.00th=[ 138], 95.00th=[ 184], 00:12:36.005 | 99.00th=[ 271], 99.50th=[ 296], 99.90th=[ 368], 99.95th=[ 368], 00:12:36.005 | 99.99th=[ 368] 00:12:36.005 bw ( KiB/s): min= 1024, max=17152, per=0.83%, avg=10347.58, stdev=5246.33, samples=19 00:12:36.005 iops : min= 8, max= 134, avg=80.79, stdev=41.08, samples=19 00:12:36.005 lat (msec) : 10=4.03%, 20=27.49%, 50=12.58%, 100=42.90%, 250=12.01% 00:12:36.005 lat (msec) : 500=0.99% 00:12:36.005 cpu : usr=0.49%, sys=0.32%, ctx=2425, majf=0, minf=3 00:12:36.005 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.005 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.005 issued rwts: total=640,775,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.005 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:36.005 job87: (groupid=0, jobs=1): err= 0: pid=81876: Tue Jul 23 05:03:36 2024 00:12:36.005 read: IOPS=88, BW=11.1MiB/s (11.6MB/s)(97.9MiB/8811msec) 00:12:36.005 slat (usec): min=7, max=1501, avg=56.92, stdev=119.61 00:12:36.005 clat (usec): min=5011, max=69524, avg=13122.49, stdev=6840.47 00:12:36.005 lat (usec): min=5059, max=69540, avg=13179.41, stdev=6834.56 00:12:36.005 clat percentiles (usec): 00:12:36.005 | 1.00th=[ 7046], 5.00th=[ 8160], 10.00th=[ 8586], 20.00th=[ 9110], 00:12:36.005 | 30.00th=[ 9634], 40.00th=[10159], 50.00th=[10945], 60.00th=[12256], 00:12:36.005 | 70.00th=[13566], 80.00th=[16188], 90.00th=[19530], 95.00th=[22676], 00:12:36.005 | 99.00th=[44303], 99.50th=[62653], 99.90th=[69731], 99.95th=[69731], 00:12:36.005 | 99.99th=[69731] 00:12:36.005 write: IOPS=91, BW=11.5MiB/s (12.0MB/s)(100MiB/8708msec); 0 zone resets 00:12:36.005 slat (usec): min=37, max=6374, avg=146.11, stdev=339.80 00:12:36.005 clat (msec): min=16, 
max=288, avg=86.32, stdev=38.09 00:12:36.005 lat (msec): min=16, max=288, avg=86.46, stdev=38.11 00:12:36.005 clat percentiles (msec): 00:12:36.005 | 1.00th=[ 23], 5.00th=[ 54], 10.00th=[ 55], 20.00th=[ 59], 00:12:36.005 | 30.00th=[ 63], 40.00th=[ 68], 50.00th=[ 74], 60.00th=[ 81], 00:12:36.005 | 70.00th=[ 95], 80.00th=[ 111], 90.00th=[ 138], 95.00th=[ 163], 00:12:36.005 | 99.00th=[ 218], 99.50th=[ 228], 99.90th=[ 288], 99.95th=[ 288], 00:12:36.005 | 99.99th=[ 288] 00:12:36.005 bw ( KiB/s): min= 2560, max=19968, per=0.82%, avg=10240.40, stdev=5014.72, samples=20 00:12:36.005 iops : min= 20, max= 156, avg=79.95, stdev=39.15, samples=20 00:12:36.005 lat (msec) : 10=19.33%, 20=25.90%, 50=5.12%, 100=35.75%, 250=13.71% 00:12:36.005 lat (msec) : 500=0.19% 00:12:36.005 cpu : usr=0.54%, sys=0.33%, ctx=2581, majf=0, minf=1 00:12:36.005 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.005 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.005 issued rwts: total=783,800,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.005 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:36.005 job88: (groupid=0, jobs=1): err= 0: pid=81877: Tue Jul 23 05:03:36 2024 00:12:36.005 read: IOPS=75, BW=9721KiB/s (9954kB/s)(80.0MiB/8427msec) 00:12:36.005 slat (usec): min=7, max=1881, avg=62.88, stdev=138.79 00:12:36.005 clat (msec): min=3, max=146, avg=21.09, stdev=18.38 00:12:36.005 lat (msec): min=3, max=146, avg=21.15, stdev=18.38 00:12:36.005 clat percentiles (msec): 00:12:36.005 | 1.00th=[ 5], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 11], 00:12:36.005 | 30.00th=[ 13], 40.00th=[ 15], 50.00th=[ 16], 60.00th=[ 20], 00:12:36.005 | 70.00th=[ 22], 80.00th=[ 24], 90.00th=[ 33], 95.00th=[ 58], 00:12:36.005 | 99.00th=[ 120], 99.50th=[ 125], 99.90th=[ 146], 99.95th=[ 146], 00:12:36.005 | 99.99th=[ 146] 00:12:36.005 write: IOPS=84, BW=10.5MiB/s 
(11.0MB/s)(87.5MiB/8315msec); 0 zone resets 00:12:36.005 slat (usec): min=38, max=1623, avg=147.82, stdev=193.13 00:12:36.005 clat (msec): min=39, max=290, avg=94.17, stdev=37.96 00:12:36.006 lat (msec): min=39, max=290, avg=94.32, stdev=37.97 00:12:36.006 clat percentiles (msec): 00:12:36.006 | 1.00th=[ 52], 5.00th=[ 55], 10.00th=[ 58], 20.00th=[ 64], 00:12:36.006 | 30.00th=[ 69], 40.00th=[ 75], 50.00th=[ 82], 60.00th=[ 94], 00:12:36.006 | 70.00th=[ 107], 80.00th=[ 122], 90.00th=[ 140], 95.00th=[ 163], 00:12:36.006 | 99.00th=[ 247], 99.50th=[ 268], 99.90th=[ 292], 99.95th=[ 292], 00:12:36.006 | 99.99th=[ 292] 00:12:36.006 bw ( KiB/s): min= 1792, max=15360, per=0.71%, avg=8843.95, stdev=4768.91, samples=19 00:12:36.006 iops : min= 14, max= 120, avg=68.95, stdev=37.27, samples=19 00:12:36.006 lat (msec) : 4=0.07%, 10=9.10%, 20=20.37%, 50=15.90%, 100=34.63% 00:12:36.006 lat (msec) : 250=19.40%, 500=0.52% 00:12:36.006 cpu : usr=0.47%, sys=0.27%, ctx=2340, majf=0, minf=3 00:12:36.006 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.006 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.006 issued rwts: total=640,700,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.006 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:36.006 job89: (groupid=0, jobs=1): err= 0: pid=81878: Tue Jul 23 05:03:36 2024 00:12:36.006 read: IOPS=89, BW=11.1MiB/s (11.7MB/s)(100MiB/8981msec) 00:12:36.006 slat (usec): min=5, max=1840, avg=56.40, stdev=125.52 00:12:36.006 clat (usec): min=4199, max=43308, avg=9645.51, stdev=5041.94 00:12:36.006 lat (usec): min=4286, max=43322, avg=9701.90, stdev=5035.37 00:12:36.006 clat percentiles (usec): 00:12:36.006 | 1.00th=[ 4490], 5.00th=[ 5276], 10.00th=[ 5800], 20.00th=[ 6259], 00:12:36.006 | 30.00th=[ 7046], 40.00th=[ 7701], 50.00th=[ 8455], 60.00th=[ 8979], 00:12:36.006 | 70.00th=[ 9765], 
80.00th=[11863], 90.00th=[14746], 95.00th=[17957], 00:12:36.006 | 99.00th=[34866], 99.50th=[35390], 99.90th=[43254], 99.95th=[43254], 00:12:36.006 | 99.99th=[43254] 00:12:36.006 write: IOPS=89, BW=11.2MiB/s (11.7MB/s)(101MiB/9085msec); 0 zone resets 00:12:36.006 slat (usec): min=30, max=3008, avg=159.78, stdev=286.86 00:12:36.006 clat (msec): min=4, max=244, avg=88.87, stdev=42.21 00:12:36.006 lat (msec): min=4, max=244, avg=89.03, stdev=42.22 00:12:36.006 clat percentiles (msec): 00:12:36.006 | 1.00th=[ 8], 5.00th=[ 53], 10.00th=[ 56], 20.00th=[ 59], 00:12:36.006 | 30.00th=[ 64], 40.00th=[ 68], 50.00th=[ 75], 60.00th=[ 86], 00:12:36.006 | 70.00th=[ 104], 80.00th=[ 123], 90.00th=[ 153], 95.00th=[ 167], 00:12:36.006 | 99.00th=[ 230], 99.50th=[ 243], 99.90th=[ 245], 99.95th=[ 245], 00:12:36.006 | 99.99th=[ 245] 00:12:36.006 bw ( KiB/s): min= 3072, max=22060, per=0.83%, avg=10289.85, stdev=5179.32, samples=20 00:12:36.006 iops : min= 24, max= 172, avg=80.20, stdev=40.49, samples=20 00:12:36.006 lat (msec) : 10=36.81%, 20=12.97%, 50=2.30%, 100=32.28%, 250=15.64% 00:12:36.006 cpu : usr=0.54%, sys=0.35%, ctx=2559, majf=0, minf=1 00:12:36.006 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.006 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.006 issued rwts: total=800,811,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.006 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:36.006 job90: (groupid=0, jobs=1): err= 0: pid=81879: Tue Jul 23 05:03:36 2024 00:12:36.006 read: IOPS=74, BW=9591KiB/s (9822kB/s)(80.0MiB/8541msec) 00:12:36.006 slat (usec): min=6, max=1523, avg=59.30, stdev=118.45 00:12:36.006 clat (msec): min=3, max=154, avg=20.71, stdev=22.43 00:12:36.006 lat (msec): min=3, max=154, avg=20.77, stdev=22.43 00:12:36.006 clat percentiles (msec): 00:12:36.006 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 
8], 00:12:36.006 | 30.00th=[ 10], 40.00th=[ 16], 50.00th=[ 17], 60.00th=[ 18], 00:12:36.006 | 70.00th=[ 20], 80.00th=[ 25], 90.00th=[ 34], 95.00th=[ 41], 00:12:36.006 | 99.00th=[ 140], 99.50th=[ 144], 99.90th=[ 155], 99.95th=[ 155], 00:12:36.006 | 99.99th=[ 155] 00:12:36.006 write: IOPS=84, BW=10.6MiB/s (11.1MB/s)(89.0MiB/8389msec); 0 zone resets 00:12:36.006 slat (usec): min=38, max=2285, avg=166.53, stdev=232.31 00:12:36.006 clat (msec): min=40, max=339, avg=93.17, stdev=40.45 00:12:36.006 lat (msec): min=40, max=339, avg=93.34, stdev=40.45 00:12:36.006 clat percentiles (msec): 00:12:36.006 | 1.00th=[ 47], 5.00th=[ 59], 10.00th=[ 61], 20.00th=[ 65], 00:12:36.006 | 30.00th=[ 70], 40.00th=[ 73], 50.00th=[ 80], 60.00th=[ 91], 00:12:36.006 | 70.00th=[ 103], 80.00th=[ 115], 90.00th=[ 133], 95.00th=[ 163], 00:12:36.006 | 99.00th=[ 255], 99.50th=[ 271], 99.90th=[ 338], 99.95th=[ 338], 00:12:36.006 | 99.99th=[ 338] 00:12:36.006 bw ( KiB/s): min= 1792, max=16128, per=0.72%, avg=9007.10, stdev=4545.20, samples=20 00:12:36.006 iops : min= 14, max= 126, avg=70.20, stdev=35.48, samples=20 00:12:36.006 lat (msec) : 4=0.07%, 10=14.20%, 20=19.53%, 50=11.98%, 100=35.87% 00:12:36.006 lat (msec) : 250=17.53%, 500=0.81% 00:12:36.006 cpu : usr=0.52%, sys=0.25%, ctx=2337, majf=0, minf=1 00:12:36.006 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.006 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.006 issued rwts: total=640,712,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.006 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:36.006 job91: (groupid=0, jobs=1): err= 0: pid=81880: Tue Jul 23 05:03:36 2024 00:12:36.006 read: IOPS=75, BW=9726KiB/s (9959kB/s)(80.0MiB/8423msec) 00:12:36.006 slat (usec): min=6, max=2585, avg=76.20, stdev=160.90 00:12:36.006 clat (usec): min=8202, max=65953, avg=17845.58, stdev=7692.44 00:12:36.006 
lat (usec): min=8341, max=65962, avg=17921.78, stdev=7690.96 00:12:36.006 clat percentiles (usec): 00:12:36.006 | 1.00th=[ 8717], 5.00th=[ 9503], 10.00th=[10159], 20.00th=[11731], 00:12:36.006 | 30.00th=[13698], 40.00th=[15270], 50.00th=[16450], 60.00th=[17171], 00:12:36.006 | 70.00th=[18744], 80.00th=[21103], 90.00th=[28443], 95.00th=[33424], 00:12:36.006 | 99.00th=[48497], 99.50th=[50070], 99.90th=[65799], 99.95th=[65799], 00:12:36.006 | 99.99th=[65799] 00:12:36.006 write: IOPS=92, BW=11.5MiB/s (12.1MB/s)(99.4MiB/8609msec); 0 zone resets 00:12:36.006 slat (usec): min=39, max=3340, avg=159.19, stdev=249.89 00:12:36.006 clat (msec): min=54, max=275, avg=85.77, stdev=34.29 00:12:36.006 lat (msec): min=54, max=275, avg=85.93, stdev=34.28 00:12:36.006 clat percentiles (msec): 00:12:36.006 | 1.00th=[ 57], 5.00th=[ 58], 10.00th=[ 61], 20.00th=[ 64], 00:12:36.006 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 77], 60.00th=[ 81], 00:12:36.006 | 70.00th=[ 88], 80.00th=[ 100], 90.00th=[ 121], 95.00th=[ 163], 00:12:36.006 | 99.00th=[ 239], 99.50th=[ 251], 99.90th=[ 275], 99.95th=[ 275], 00:12:36.006 | 99.99th=[ 275] 00:12:36.006 bw ( KiB/s): min= 1788, max=15104, per=0.81%, avg=10083.85, stdev=4636.12, samples=20 00:12:36.006 iops : min= 13, max= 118, avg=78.65, stdev=36.29, samples=20 00:12:36.006 lat (msec) : 10=3.90%, 20=29.69%, 50=10.59%, 100=44.95%, 250=10.59% 00:12:36.006 lat (msec) : 500=0.28% 00:12:36.006 cpu : usr=0.52%, sys=0.32%, ctx=2516, majf=0, minf=5 00:12:36.006 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.006 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.006 issued rwts: total=640,795,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.006 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:36.006 job92: (groupid=0, jobs=1): err= 0: pid=81881: Tue Jul 23 05:03:36 2024 00:12:36.006 read: IOPS=76, 
BW=9825KiB/s (10.1MB/s)(80.0MiB/8338msec) 00:12:36.006 slat (usec): min=7, max=1077, avg=60.36, stdev=109.90 00:12:36.006 clat (msec): min=3, max=171, avg=17.72, stdev=18.86 00:12:36.006 lat (msec): min=3, max=172, avg=17.78, stdev=18.87 00:12:36.006 clat percentiles (msec): 00:12:36.006 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 9], 00:12:36.006 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 14], 60.00th=[ 15], 00:12:36.006 | 70.00th=[ 17], 80.00th=[ 22], 90.00th=[ 33], 95.00th=[ 46], 00:12:36.006 | 99.00th=[ 138], 99.50th=[ 153], 99.90th=[ 171], 99.95th=[ 171], 00:12:36.007 | 99.99th=[ 171] 00:12:36.007 write: IOPS=77, BW=9886KiB/s (10.1MB/s)(83.2MiB/8623msec); 0 zone resets 00:12:36.007 slat (usec): min=31, max=3943, avg=142.38, stdev=224.53 00:12:36.007 clat (msec): min=47, max=343, avg=102.77, stdev=47.02 00:12:36.007 lat (msec): min=47, max=343, avg=102.91, stdev=47.01 00:12:36.007 clat percentiles (msec): 00:12:36.007 | 1.00th=[ 53], 5.00th=[ 59], 10.00th=[ 61], 20.00th=[ 66], 00:12:36.007 | 30.00th=[ 72], 40.00th=[ 80], 50.00th=[ 91], 60.00th=[ 103], 00:12:36.007 | 70.00th=[ 113], 80.00th=[ 126], 90.00th=[ 165], 95.00th=[ 203], 00:12:36.007 | 99.00th=[ 279], 99.50th=[ 300], 99.90th=[ 342], 99.95th=[ 342], 00:12:36.007 | 99.99th=[ 342] 00:12:36.007 bw ( KiB/s): min= 1280, max=14336, per=0.68%, avg=8421.15, stdev=4159.07, samples=20 00:12:36.007 iops : min= 10, max= 112, avg=65.65, stdev=32.50, samples=20 00:12:36.007 lat (msec) : 4=0.08%, 10=17.61%, 20=19.60%, 50=10.26%, 100=30.25% 00:12:36.007 lat (msec) : 250=21.29%, 500=0.92% 00:12:36.007 cpu : usr=0.40%, sys=0.34%, ctx=2194, majf=0, minf=10 00:12:36.007 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.007 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.007 issued rwts: total=640,666,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.007 latency : target=0, 
window=0, percentile=100.00%, depth=8 00:12:36.007 job93: (groupid=0, jobs=1): err= 0: pid=81882: Tue Jul 23 05:03:36 2024 00:12:36.007 read: IOPS=78, BW=9.81MiB/s (10.3MB/s)(80.0MiB/8155msec) 00:12:36.007 slat (usec): min=6, max=1683, avg=65.93, stdev=138.05 00:12:36.007 clat (msec): min=3, max=144, avg=17.06, stdev=18.50 00:12:36.007 lat (msec): min=3, max=144, avg=17.13, stdev=18.51 00:12:36.007 clat percentiles (msec): 00:12:36.007 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 8], 00:12:36.007 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 12], 60.00th=[ 13], 00:12:36.007 | 70.00th=[ 16], 80.00th=[ 19], 90.00th=[ 32], 95.00th=[ 53], 00:12:36.007 | 99.00th=[ 102], 99.50th=[ 134], 99.90th=[ 144], 99.95th=[ 144], 00:12:36.007 | 99.99th=[ 144] 00:12:36.007 write: IOPS=78, BW=9.87MiB/s (10.4MB/s)(85.6MiB/8673msec); 0 zone resets 00:12:36.007 slat (usec): min=35, max=3445, avg=158.33, stdev=257.28 00:12:36.007 clat (msec): min=56, max=351, avg=100.31, stdev=48.65 00:12:36.007 lat (msec): min=56, max=351, avg=100.47, stdev=48.64 00:12:36.007 clat percentiles (msec): 00:12:36.007 | 1.00th=[ 57], 5.00th=[ 60], 10.00th=[ 62], 20.00th=[ 65], 00:12:36.007 | 30.00th=[ 71], 40.00th=[ 78], 50.00th=[ 85], 60.00th=[ 95], 00:12:36.007 | 70.00th=[ 109], 80.00th=[ 121], 90.00th=[ 163], 95.00th=[ 190], 00:12:36.007 | 99.00th=[ 305], 99.50th=[ 338], 99.90th=[ 351], 99.95th=[ 351], 00:12:36.007 | 99.99th=[ 351] 00:12:36.007 bw ( KiB/s): min= 2048, max=16351, per=0.73%, avg=9092.33, stdev=3981.85, samples=18 00:12:36.007 iops : min= 16, max= 127, avg=70.89, stdev=31.05, samples=18 00:12:36.007 lat (msec) : 4=0.08%, 10=19.55%, 20=19.62%, 50=6.19%, 100=34.94% 00:12:36.007 lat (msec) : 250=18.11%, 500=1.51% 00:12:36.007 cpu : usr=0.48%, sys=0.26%, ctx=2294, majf=0, minf=5 00:12:36.007 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.007 complete : 0=0.0%, 4=99.3%, 8=0.7%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.007 issued rwts: total=640,685,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.007 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:36.007 job94: (groupid=0, jobs=1): err= 0: pid=81883: Tue Jul 23 05:03:36 2024 00:12:36.007 read: IOPS=74, BW=9559KiB/s (9788kB/s)(80.0MiB/8570msec) 00:12:36.007 slat (usec): min=6, max=2265, avg=79.29, stdev=169.15 00:12:36.007 clat (msec): min=8, max=157, avg=20.79, stdev=16.09 00:12:36.007 lat (msec): min=8, max=157, avg=20.87, stdev=16.09 00:12:36.007 clat percentiles (msec): 00:12:36.007 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 12], 00:12:36.007 | 30.00th=[ 14], 40.00th=[ 16], 50.00th=[ 18], 60.00th=[ 19], 00:12:36.007 | 70.00th=[ 21], 80.00th=[ 26], 90.00th=[ 35], 95.00th=[ 40], 00:12:36.007 | 99.00th=[ 120], 99.50th=[ 140], 99.90th=[ 159], 99.95th=[ 159], 00:12:36.007 | 99.99th=[ 159] 00:12:36.007 write: IOPS=91, BW=11.4MiB/s (12.0MB/s)(96.0MiB/8394msec); 0 zone resets 00:12:36.007 slat (usec): min=30, max=1690, avg=134.74, stdev=194.99 00:12:36.007 clat (msec): min=17, max=306, avg=86.42, stdev=36.55 00:12:36.007 lat (msec): min=17, max=306, avg=86.56, stdev=36.56 00:12:36.007 clat percentiles (msec): 00:12:36.007 | 1.00th=[ 31], 5.00th=[ 59], 10.00th=[ 61], 20.00th=[ 65], 00:12:36.007 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 81], 00:12:36.007 | 70.00th=[ 91], 80.00th=[ 102], 90.00th=[ 121], 95.00th=[ 165], 00:12:36.007 | 99.00th=[ 251], 99.50th=[ 264], 99.90th=[ 305], 99.95th=[ 305], 00:12:36.007 | 99.99th=[ 305] 00:12:36.007 bw ( KiB/s): min= 1792, max=16384, per=0.83%, avg=10254.00, stdev=4216.29, samples=19 00:12:36.007 iops : min= 14, max= 128, avg=80.05, stdev=32.95, samples=19 00:12:36.007 lat (msec) : 10=4.83%, 20=25.99%, 50=14.42%, 100=42.90%, 250=11.29% 00:12:36.007 lat (msec) : 500=0.57% 00:12:36.007 cpu : usr=0.45%, sys=0.31%, ctx=2393, majf=0, minf=7 00:12:36.007 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:12:36.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.007 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.007 issued rwts: total=640,768,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.007 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:36.007 job95: (groupid=0, jobs=1): err= 0: pid=81884: Tue Jul 23 05:03:36 2024 00:12:36.007 read: IOPS=69, BW=8921KiB/s (9135kB/s)(67.2MiB/7719msec) 00:12:36.007 slat (usec): min=6, max=3207, avg=74.32, stdev=205.50 00:12:36.007 clat (msec): min=3, max=267, avg=25.36, stdev=43.07 00:12:36.007 lat (msec): min=3, max=268, avg=25.43, stdev=43.07 00:12:36.007 clat percentiles (msec): 00:12:36.007 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 8], 00:12:36.007 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 12], 60.00th=[ 13], 00:12:36.007 | 70.00th=[ 16], 80.00th=[ 22], 90.00th=[ 57], 95.00th=[ 110], 00:12:36.007 | 99.00th=[ 239], 99.50th=[ 268], 99.90th=[ 268], 99.95th=[ 268], 00:12:36.007 | 99.99th=[ 268] 00:12:36.007 write: IOPS=77, BW=9888KiB/s (10.1MB/s)(80.0MiB/8285msec); 0 zone resets 00:12:36.007 slat (usec): min=37, max=2914, avg=162.07, stdev=238.87 00:12:36.007 clat (msec): min=54, max=319, avg=102.92, stdev=43.73 00:12:36.007 lat (msec): min=55, max=319, avg=103.09, stdev=43.74 00:12:36.007 clat percentiles (msec): 00:12:36.007 | 1.00th=[ 57], 5.00th=[ 61], 10.00th=[ 64], 20.00th=[ 68], 00:12:36.007 | 30.00th=[ 73], 40.00th=[ 79], 50.00th=[ 87], 60.00th=[ 102], 00:12:36.007 | 70.00th=[ 116], 80.00th=[ 132], 90.00th=[ 171], 95.00th=[ 192], 00:12:36.007 | 99.00th=[ 239], 99.50th=[ 264], 99.90th=[ 321], 99.95th=[ 321], 00:12:36.007 | 99.99th=[ 321] 00:12:36.007 bw ( KiB/s): min= 2048, max=15104, per=0.69%, avg=8532.83, stdev=3983.65, samples=18 00:12:36.007 iops : min= 16, max= 118, avg=66.50, stdev=31.21, samples=18 00:12:36.007 lat (msec) : 4=0.17%, 10=18.93%, 20=16.55%, 50=4.84%, 100=34.97% 00:12:36.007 lat (msec) : 250=23.68%, 
500=0.85% 00:12:36.007 cpu : usr=0.40%, sys=0.27%, ctx=2005, majf=0, minf=7 00:12:36.007 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.007 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.007 issued rwts: total=538,640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.007 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:36.007 job96: (groupid=0, jobs=1): err= 0: pid=81885: Tue Jul 23 05:03:36 2024 00:12:36.007 read: IOPS=74, BW=9478KiB/s (9706kB/s)(80.0MiB/8643msec) 00:12:36.007 slat (usec): min=6, max=4179, avg=66.28, stdev=208.84 00:12:36.007 clat (usec): min=5950, max=42914, avg=15221.32, stdev=6573.58 00:12:36.007 lat (usec): min=6602, max=43085, avg=15287.59, stdev=6598.51 00:12:36.007 clat percentiles (usec): 00:12:36.007 | 1.00th=[ 7046], 5.00th=[ 8094], 10.00th=[ 8717], 20.00th=[ 9634], 00:12:36.007 | 30.00th=[10683], 40.00th=[11863], 50.00th=[13173], 60.00th=[15926], 00:12:36.007 | 70.00th=[17695], 80.00th=[19530], 90.00th=[24249], 95.00th=[28443], 00:12:36.007 | 99.00th=[36963], 99.50th=[38536], 99.90th=[42730], 99.95th=[42730], 00:12:36.007 | 99.99th=[42730] 00:12:36.007 write: IOPS=90, BW=11.3MiB/s (11.9MB/s)(99.8MiB/8823msec); 0 zone resets 00:12:36.007 slat (usec): min=38, max=1713, avg=143.08, stdev=176.28 00:12:36.007 clat (msec): min=32, max=251, avg=87.71, stdev=36.29 00:12:36.007 lat (msec): min=32, max=251, avg=87.86, stdev=36.29 00:12:36.007 clat percentiles (msec): 00:12:36.007 | 1.00th=[ 40], 5.00th=[ 58], 10.00th=[ 60], 20.00th=[ 63], 00:12:36.007 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 75], 60.00th=[ 81], 00:12:36.007 | 70.00th=[ 89], 80.00th=[ 109], 90.00th=[ 134], 95.00th=[ 174], 00:12:36.007 | 99.00th=[ 226], 99.50th=[ 239], 99.90th=[ 253], 99.95th=[ 253], 00:12:36.007 | 99.99th=[ 253] 00:12:36.007 bw ( KiB/s): min= 1792, max=16160, per=0.86%, avg=10659.37, stdev=4159.41, 
samples=19 00:12:36.008 iops : min= 14, max= 126, avg=83.26, stdev=32.48, samples=19 00:12:36.008 lat (msec) : 10=10.78%, 20=25.59%, 50=8.69%, 100=42.28%, 250=12.59% 00:12:36.008 lat (msec) : 500=0.07% 00:12:36.008 cpu : usr=0.47%, sys=0.35%, ctx=2403, majf=0, minf=12 00:12:36.008 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.008 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.008 issued rwts: total=640,798,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.008 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:36.008 job97: (groupid=0, jobs=1): err= 0: pid=81886: Tue Jul 23 05:03:36 2024 00:12:36.008 read: IOPS=72, BW=9240KiB/s (9462kB/s)(80.0MiB/8866msec) 00:12:36.008 slat (usec): min=6, max=1486, avg=54.52, stdev=116.27 00:12:36.008 clat (msec): min=4, max=107, avg=13.51, stdev=11.15 00:12:36.008 lat (msec): min=4, max=107, avg=13.56, stdev=11.15 00:12:36.008 clat percentiles (msec): 00:12:36.008 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:12:36.008 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 12], 00:12:36.008 | 70.00th=[ 14], 80.00th=[ 17], 90.00th=[ 21], 95.00th=[ 25], 00:12:36.008 | 99.00th=[ 99], 99.50th=[ 104], 99.90th=[ 108], 99.95th=[ 108], 00:12:36.008 | 99.99th=[ 108] 00:12:36.008 write: IOPS=88, BW=11.1MiB/s (11.6MB/s)(99.4MiB/8990msec); 0 zone resets 00:12:36.008 slat (usec): min=39, max=3034, avg=155.94, stdev=235.70 00:12:36.008 clat (msec): min=2, max=308, avg=89.85, stdev=40.61 00:12:36.008 lat (msec): min=2, max=308, avg=90.00, stdev=40.62 00:12:36.008 clat percentiles (msec): 00:12:36.008 | 1.00th=[ 10], 5.00th=[ 57], 10.00th=[ 59], 20.00th=[ 64], 00:12:36.008 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 78], 60.00th=[ 85], 00:12:36.008 | 70.00th=[ 97], 80.00th=[ 115], 90.00th=[ 136], 95.00th=[ 171], 00:12:36.008 | 99.00th=[ 259], 99.50th=[ 271], 99.90th=[ 309], 
99.95th=[ 309], 00:12:36.008 | 99.99th=[ 309] 00:12:36.008 bw ( KiB/s): min= 2052, max=20480, per=0.81%, avg=10072.05, stdev=4656.07, samples=20 00:12:36.008 iops : min= 16, max= 160, avg=78.60, stdev=36.41, samples=20 00:12:36.008 lat (msec) : 4=0.14%, 10=18.95%, 20=21.39%, 50=5.44%, 100=38.75% 00:12:36.008 lat (msec) : 250=14.77%, 500=0.56% 00:12:36.008 cpu : usr=0.56%, sys=0.28%, ctx=2424, majf=0, minf=3 00:12:36.008 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.008 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.008 issued rwts: total=640,795,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.008 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:36.008 job98: (groupid=0, jobs=1): err= 0: pid=81887: Tue Jul 23 05:03:36 2024 00:12:36.008 read: IOPS=72, BW=9330KiB/s (9554kB/s)(80.0MiB/8780msec) 00:12:36.008 slat (usec): min=6, max=2390, avg=62.65, stdev=152.40 00:12:36.008 clat (usec): min=5719, max=41781, avg=12900.18, stdev=4288.12 00:12:36.008 lat (usec): min=5747, max=41797, avg=12962.83, stdev=4286.11 00:12:36.008 clat percentiles (usec): 00:12:36.008 | 1.00th=[ 6652], 5.00th=[ 7242], 10.00th=[ 8586], 20.00th=[ 9896], 00:12:36.008 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11731], 60.00th=[12780], 00:12:36.008 | 70.00th=[14222], 80.00th=[15795], 90.00th=[18482], 95.00th=[20055], 00:12:36.008 | 99.00th=[26346], 99.50th=[27132], 99.90th=[41681], 99.95th=[41681], 00:12:36.008 | 99.99th=[41681] 00:12:36.008 write: IOPS=85, BW=10.7MiB/s (11.3MB/s)(97.1MiB/9045msec); 0 zone resets 00:12:36.008 slat (usec): min=33, max=3077, avg=155.27, stdev=226.28 00:12:36.008 clat (msec): min=15, max=358, avg=91.70, stdev=44.82 00:12:36.008 lat (msec): min=15, max=358, avg=91.85, stdev=44.82 00:12:36.008 clat percentiles (msec): 00:12:36.008 | 1.00th=[ 28], 5.00th=[ 59], 10.00th=[ 61], 20.00th=[ 64], 00:12:36.008 | 
30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 81], 00:12:36.008 | 70.00th=[ 92], 80.00th=[ 111], 90.00th=[ 148], 95.00th=[ 182], 00:12:36.008 | 99.00th=[ 275], 99.50th=[ 309], 99.90th=[ 359], 99.95th=[ 359], 00:12:36.008 | 99.99th=[ 359] 00:12:36.008 bw ( KiB/s): min= 1792, max=16896, per=0.79%, avg=9843.65, stdev=4644.94, samples=20 00:12:36.008 iops : min= 14, max= 132, avg=76.85, stdev=36.28, samples=20 00:12:36.008 lat (msec) : 10=9.81%, 20=33.24%, 50=3.25%, 100=39.38%, 250=13.55% 00:12:36.008 lat (msec) : 500=0.78% 00:12:36.008 cpu : usr=0.48%, sys=0.30%, ctx=2404, majf=0, minf=1 00:12:36.008 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.008 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.008 issued rwts: total=640,777,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.008 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:36.008 job99: (groupid=0, jobs=1): err= 0: pid=81888: Tue Jul 23 05:03:36 2024 00:12:36.008 read: IOPS=69, BW=8912KiB/s (9126kB/s)(80.0MiB/9192msec) 00:12:36.008 slat (usec): min=7, max=1546, avg=69.04, stdev=140.33 00:12:36.008 clat (msec): min=4, max=180, avg=21.67, stdev=25.44 00:12:36.008 lat (msec): min=4, max=180, avg=21.74, stdev=25.43 00:12:36.008 clat percentiles (msec): 00:12:36.008 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 9], 00:12:36.008 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 14], 60.00th=[ 16], 00:12:36.008 | 70.00th=[ 20], 80.00th=[ 31], 90.00th=[ 46], 95.00th=[ 57], 00:12:36.008 | 99.00th=[ 161], 99.50th=[ 180], 99.90th=[ 180], 99.95th=[ 180], 00:12:36.008 | 99.99th=[ 180] 00:12:36.008 write: IOPS=91, BW=11.4MiB/s (12.0MB/s)(95.2MiB/8343msec); 0 zone resets 00:12:36.008 slat (usec): min=32, max=2510, avg=135.90, stdev=197.91 00:12:36.008 clat (msec): min=2, max=274, avg=86.85, stdev=46.34 00:12:36.008 lat (msec): min=2, max=274, avg=86.99, 
stdev=46.36 00:12:36.008 clat percentiles (msec): 00:12:36.008 | 1.00th=[ 4], 5.00th=[ 10], 10.00th=[ 58], 20.00th=[ 62], 00:12:36.008 | 30.00th=[ 65], 40.00th=[ 70], 50.00th=[ 77], 60.00th=[ 84], 00:12:36.008 | 70.00th=[ 95], 80.00th=[ 112], 90.00th=[ 144], 95.00th=[ 190], 00:12:36.008 | 99.00th=[ 239], 99.50th=[ 257], 99.90th=[ 275], 99.95th=[ 275], 00:12:36.008 | 99.99th=[ 275] 00:12:36.008 bw ( KiB/s): min= 256, max=30720, per=0.82%, avg=10152.42, stdev=6445.50, samples=19 00:12:36.008 iops : min= 2, max= 240, avg=79.05, stdev=50.36, samples=19 00:12:36.008 lat (msec) : 4=0.93%, 10=17.62%, 20=19.04%, 50=9.42%, 100=36.73% 00:12:36.008 lat (msec) : 250=15.98%, 500=0.29% 00:12:36.008 cpu : usr=0.45%, sys=0.35%, ctx=2343, majf=0, minf=3 00:12:36.008 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.008 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.008 issued rwts: total=640,762,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.008 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:36.008 00:12:36.008 Run status group 0 (all jobs): 00:12:36.008 READ: bw=1041MiB/s (1091MB/s), 8730KiB/s-15.6MiB/s (8939kB/s-16.4MB/s), io=9957MiB (10.4GB), run=7526-9567msec 00:12:36.008 WRITE: bw=1213MiB/s (1272MB/s), 9189KiB/s-17.3MiB/s (9410kB/s-18.1MB/s), io=10.8GiB (11.6GB), run=7909-9085msec 00:12:36.008 00:12:36.008 Disk stats (read/write): 00:12:36.008 sdb: ios=724/800, merge=0/0, ticks=11735/65398, in_queue=77134, util=71.99% 00:12:36.008 sdd: ios=674/687, merge=0/0, ticks=8224/68238, in_queue=76463, util=71.59% 00:12:36.008 sde: ios=676/794, merge=0/0, ticks=9820/66771, in_queue=76592, util=72.28% 00:12:36.008 sdj: ios=611/640, merge=0/0, ticks=15819/61569, in_queue=77388, util=72.44% 00:12:36.008 sdn: ios=640/644, merge=0/0, ticks=13737/62900, in_queue=76638, util=72.74% 00:12:36.008 sdr: ios=641/737, merge=0/0, 
ticks=9198/67745, in_queue=76943, util=73.19% 00:12:36.008 sdv: ios=775/800, merge=0/0, ticks=12403/64601, in_queue=77005, util=74.09% 00:12:36.008 sdx: ios=641/760, merge=0/0, ticks=10319/66112, in_queue=76431, util=74.04% 00:12:36.008 sdaa: ios=641/750, merge=0/0, ticks=11460/63978, in_queue=75438, util=74.19% 00:12:36.008 sdad: ios=641/785, merge=0/0, ticks=9906/66443, in_queue=76350, util=75.09% 00:12:36.008 sdi: ios=962/1044, merge=0/0, ticks=11092/65813, in_queue=76905, util=75.19% 00:12:36.008 sdl: ios=962/1114, merge=0/0, ticks=9685/66821, in_queue=76506, util=75.63% 00:12:36.008 sdp: ios=961/1001, merge=0/0, ticks=11956/65383, in_queue=77340, util=75.90% 00:12:36.008 sds: ios=961/989, merge=0/0, ticks=8497/68569, in_queue=77066, util=76.05% 00:12:36.008 sdz: ios=1157/1138, merge=0/0, ticks=9898/67707, in_queue=77606, util=76.72% 00:12:36.009 sdaf: ios=1122/1124, merge=0/0, ticks=9355/66632, in_queue=75988, util=76.97% 00:12:36.009 sdaj: ios=962/1048, merge=0/0, ticks=11952/64975, in_queue=76928, util=77.25% 00:12:36.009 sdal: ios=962/1076, merge=0/0, ticks=12036/65143, in_queue=77179, util=77.35% 00:12:36.009 sdam: ios=962/1007, merge=0/0, ticks=12041/64413, in_queue=76454, util=77.05% 00:12:36.009 sdan: ios=1160/1120, merge=0/0, ticks=10490/66816, in_queue=77306, util=77.28% 00:12:36.009 sdh: ios=1158/1122, merge=0/0, ticks=11302/65288, in_queue=76590, util=77.69% 00:12:36.009 sdk: ios=962/1014, merge=0/0, ticks=12510/64578, in_queue=77089, util=77.81% 00:12:36.009 sdo: ios=997/1117, merge=0/0, ticks=13202/64101, in_queue=77303, util=78.18% 00:12:36.009 sdt: ios=1157/1153, merge=0/0, ticks=11238/66409, in_queue=77648, util=78.80% 00:12:36.009 sdw: ios=972/1120, merge=0/0, ticks=8418/67961, in_queue=76379, util=78.35% 00:12:36.009 sdac: ios=961/974, merge=0/0, ticks=12185/64757, in_queue=76943, util=78.34% 00:12:36.009 sdae: ios=962/1020, merge=0/0, ticks=11250/65968, in_queue=77219, util=78.30% 00:12:36.009 sdah: ios=1122/1151, merge=0/0, 
ticks=10571/66777, in_queue=77348, util=79.18% 00:12:36.009 sdai: ios=962/1060, merge=0/0, ticks=11432/65150, in_queue=76582, util=79.01% 00:12:36.009 sdak: ios=962/1058, merge=0/0, ticks=12692/63917, in_queue=76609, util=78.82% 00:12:36.009 sdap: ios=641/730, merge=0/0, ticks=11997/64612, in_queue=76610, util=79.33% 00:12:36.009 sdar: ios=643/761, merge=0/0, ticks=10753/65641, in_queue=76394, util=79.73% 00:12:36.009 sdau: ios=554/640, merge=0/0, ticks=12150/65115, in_queue=77265, util=79.68% 00:12:36.009 sdav: ios=679/781, merge=0/0, ticks=12447/65464, in_queue=77912, util=80.58% 00:12:36.009 sday: ios=481/640, merge=0/0, ticks=7395/70124, in_queue=77519, util=80.40% 00:12:36.009 sdbb: ios=642/785, merge=0/0, ticks=10114/67146, in_queue=77261, util=80.50% 00:12:36.009 sdbd: ios=641/739, merge=0/0, ticks=12030/64457, in_queue=76488, util=80.84% 00:12:36.009 sdbf: ios=641/745, merge=0/0, ticks=11401/64713, in_queue=76114, util=81.17% 00:12:36.009 sdbg: ios=641/776, merge=0/0, ticks=10208/66009, in_queue=76217, util=81.25% 00:12:36.009 sdbh: ios=641/658, merge=0/0, ticks=12850/63828, in_queue=76678, util=81.68% 00:12:36.009 sdao: ios=642/773, merge=0/0, ticks=9375/67039, in_queue=76414, util=81.96% 00:12:36.009 sdaq: ios=642/798, merge=0/0, ticks=13379/63222, in_queue=76602, util=82.06% 00:12:36.009 sdas: ios=641/753, merge=0/0, ticks=9067/67096, in_queue=76163, util=82.65% 00:12:36.009 sdat: ios=701/800, merge=0/0, ticks=11441/65534, in_queue=76976, util=82.89% 00:12:36.009 sdaw: ios=640/648, merge=0/0, ticks=12321/64196, in_queue=76518, util=82.96% 00:12:36.009 sdax: ios=661/800, merge=0/0, ticks=10007/65263, in_queue=75271, util=82.91% 00:12:36.009 sdaz: ios=641/758, merge=0/0, ticks=7435/68424, in_queue=75860, util=83.38% 00:12:36.009 sdba: ios=640/653, merge=0/0, ticks=12750/63779, in_queue=76530, util=83.68% 00:12:36.009 sdbc: ios=641/763, merge=0/0, ticks=8323/67693, in_queue=76016, util=83.57% 00:12:36.009 sdbe: ios=640/642, merge=0/0, ticks=11318/64343, 
in_queue=75662, util=83.80% 00:12:36.009 sdbi: ios=977/1120, merge=0/0, ticks=8205/68086, in_queue=76292, util=84.17% 00:12:36.009 sdbk: ios=962/1025, merge=0/0, ticks=8644/68046, in_queue=76690, util=83.48% 00:12:36.009 sdbn: ios=1010/1120, merge=0/0, ticks=8584/68122, in_queue=76706, util=84.76% 00:12:36.009 sdbs: ios=1156/1123, merge=0/0, ticks=10962/66022, in_queue=76984, util=84.18% 00:12:36.009 sdbw: ios=962/1092, merge=0/0, ticks=8150/68725, in_queue=76875, util=84.72% 00:12:36.009 sdcc: ios=962/968, merge=0/0, ticks=13062/63990, in_queue=77052, util=84.54% 00:12:36.009 sdcf: ios=962/996, merge=0/0, ticks=9601/67500, in_queue=77102, util=85.48% 00:12:36.009 sdci: ios=1110/1120, merge=0/0, ticks=12367/64829, in_queue=77196, util=85.75% 00:12:36.009 sdcm: ios=962/1067, merge=0/0, ticks=9034/67996, in_queue=77030, util=86.19% 00:12:36.009 sdcq: ios=962/1068, merge=0/0, ticks=9185/67955, in_queue=77140, util=86.52% 00:12:36.009 sdbj: ios=962/1006, merge=0/0, ticks=10573/66490, in_queue=77064, util=86.10% 00:12:36.009 sdbm: ios=1100/1120, merge=0/0, ticks=9466/67213, in_queue=76679, util=86.92% 00:12:36.009 sdbo: ios=1156/1140, merge=0/0, ticks=13452/63439, in_queue=76892, util=87.94% 00:12:36.009 sdbq: ios=964/1120, merge=0/0, ticks=9137/67756, in_queue=76893, util=87.64% 00:12:36.009 sdbu: ios=962/986, merge=0/0, ticks=13064/63170, in_queue=76234, util=87.62% 00:12:36.009 sdby: ios=962/1099, merge=0/0, ticks=9451/67441, in_queue=76892, util=87.94% 00:12:36.009 sdbz: ios=1122/1147, merge=0/0, ticks=12388/63920, in_queue=76308, util=88.39% 00:12:36.009 sdcd: ios=962/1060, merge=0/0, ticks=11051/65541, in_queue=76592, util=88.65% 00:12:36.009 sdcg: ios=962/1066, merge=0/0, ticks=9076/67754, in_queue=76830, util=88.87% 00:12:36.009 sdck: ios=961/969, merge=0/0, ticks=12278/63961, in_queue=76239, util=88.68% 00:12:36.009 sdbl: ios=641/650, merge=0/0, ticks=12297/63180, in_queue=75477, util=89.33% 00:12:36.009 sdbt: ios=640/757, merge=0/0, ticks=8021/67910, 
in_queue=75931, util=89.51% 00:12:36.009 sdbx: ios=641/752, merge=0/0, ticks=12827/62754, in_queue=75582, util=89.87% 00:12:36.009 sdcb: ios=640/706, merge=0/0, ticks=10182/65611, in_queue=75794, util=90.40% 00:12:36.009 sdch: ios=641/776, merge=0/0, ticks=11716/64389, in_queue=76106, util=90.78% 00:12:36.009 sdcl: ios=642/793, merge=0/0, ticks=13556/64215, in_queue=77772, util=91.40% 00:12:36.009 sdco: ios=640/737, merge=0/0, ticks=8257/68365, in_queue=76623, util=91.59% 00:12:36.009 sdcr: ios=640/649, merge=0/0, ticks=8097/68975, in_queue=77073, util=91.87% 00:12:36.009 sdcu: ios=641/764, merge=0/0, ticks=11739/64845, in_queue=76584, util=92.93% 00:12:36.009 sdcv: ios=508/640, merge=0/0, ticks=7084/70188, in_queue=77272, util=92.47% 00:12:36.009 sdbp: ios=691/800, merge=0/0, ticks=11294/64932, in_queue=76226, util=93.25% 00:12:36.009 sdbr: ios=622/640, merge=0/0, ticks=11832/65440, in_queue=77273, util=93.51% 00:12:36.009 sdbv: ios=641/759, merge=0/0, ticks=9803/66393, in_queue=76197, util=93.92% 00:12:36.009 sdca: ios=640/678, merge=0/0, ticks=9546/66656, in_queue=76202, util=94.29% 00:12:36.009 sdce: ios=640/648, merge=0/0, ticks=13410/63420, in_queue=76830, util=94.36% 00:12:36.009 sdcj: ios=642/791, merge=0/0, ticks=9354/66922, in_queue=76276, util=94.75% 00:12:36.009 sdcn: ios=641/767, merge=0/0, ticks=12460/64033, in_queue=76494, util=95.43% 00:12:36.009 sdcp: ios=746/800, merge=0/0, ticks=9325/67502, in_queue=76828, util=95.80% 00:12:36.009 sdcs: ios=641/684, merge=0/0, ticks=13246/63096, in_queue=76342, util=95.54% 00:12:36.009 sdct: ios=802/801, merge=0/0, ticks=7511/69026, in_queue=76538, util=96.43% 00:12:36.009 sda: ios=641/694, merge=0/0, ticks=13094/62626, in_queue=75720, util=96.43% 00:12:36.009 sdc: ios=641/775, merge=0/0, ticks=11205/65066, in_queue=76271, util=96.88% 00:12:36.009 sdf: ios=640/649, merge=0/0, ticks=11161/65713, in_queue=76874, util=96.98% 00:12:36.009 sdg: ios=640/664, merge=0/0, ticks=10784/65194, in_queue=75978, util=97.29% 
00:12:36.009 sdm: ios=641/753, merge=0/0, ticks=13111/63423, in_queue=76534, util=97.79% 00:12:36.009 sdq: ios=480/639, merge=0/0, ticks=12572/64912, in_queue=77485, util=97.52% 00:12:36.009 sdu: ios=641/784, merge=0/0, ticks=9549/66761, in_queue=76310, util=97.50% 00:12:36.009 sdy: ios=642/780, merge=0/0, ticks=8471/69237, in_queue=77708, util=98.18% 00:12:36.009 sdab: ios=642/762, merge=0/0, ticks=8038/68887, in_queue=76925, util=97.58% 00:12:36.009 sdag: ios=642/751, merge=0/0, ticks=13614/63391, in_queue=77005, util=98.63% 00:12:36.009 [2024-07-23 05:03:36.036926] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.009 [2024-07-23 05:03:36.038822] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.009 [2024-07-23 05:03:36.040782] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.009 [2024-07-23 05:03:36.042621] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.009 [2024-07-23 05:03:36.044553] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.009 [2024-07-23 05:03:36.046718] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.009 [2024-07-23 05:03:36.048866] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.009 [2024-07-23 05:03:36.052448] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.009 05:03:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@78 -- # timing_exit fio 00:12:36.010 05:03:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:36.010 05:03:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:36.010 [2024-07-23 05:03:36.054869] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.010 [2024-07-23 05:03:36.056733] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.010 [2024-07-23 05:03:36.059157] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.010 [2024-07-23 05:03:36.062061] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.010 [2024-07-23 05:03:36.064740] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.010 [2024-07-23 05:03:36.067005] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.010 [2024-07-23 05:03:36.070024] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.010 [2024-07-23 05:03:36.072188] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.010 [2024-07-23 05:03:36.074128] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.010 [2024-07-23 05:03:36.076304] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.010 [2024-07-23 05:03:36.078223] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.010 05:03:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@80 -- # rm -f ./local-job0-0-verify.state 00:12:36.010 [2024-07-23 05:03:36.080099] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.010 05:03:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:12:36.010 05:03:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@83 -- # rm -f 00:12:36.010 [2024-07-23 05:03:36.082055] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.010 [2024-07-23 05:03:36.083977] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.010 05:03:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@84 -- # iscsicleanup 00:12:36.010 Cleaning up iSCSI connection 00:12:36.010 
05:03:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:12:36.010 05:03:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:12:36.010 [2024-07-23 05:03:36.085851] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.010 [2024-07-23 05:03:36.087546] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.010 [2024-07-23 05:03:36.089927] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.010 [2024-07-23 05:03:36.092413] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.010 [2024-07-23 05:03:36.094438] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.010 [2024-07-23 05:03:36.099220] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.010 [2024-07-23 05:03:36.101712] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.010 [2024-07-23 05:03:36.104274] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.010 [2024-07-23 05:03:36.106872] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.010 [2024-07-23 05:03:36.109157] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.010 [2024-07-23 05:03:36.111362] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.010 [2024-07-23 05:03:36.114623] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.010 [2024-07-23 05:03:36.116993] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.010 [2024-07-23 05:03:36.119086] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.010 [2024-07-23 05:03:36.150223] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.010 [2024-07-23 05:03:36.152491] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.010 [2024-07-23 05:03:36.154512] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.010 [2024-07-23 05:03:36.157553] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.010 [2024-07-23 05:03:36.164206] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.010 [2024-07-23 05:03:36.172264] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.010 [2024-07-23 05:03:36.178959] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.010 [2024-07-23 05:03:36.183265] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.010 [2024-07-23 05:03:36.185817] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.010 [2024-07-23 05:03:36.188724] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.269 [2024-07-23 05:03:36.202670] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.269 [2024-07-23 05:03:36.207004] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.269 [2024-07-23 05:03:36.209771] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.269 [2024-07-23 05:03:36.212489] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.269 [2024-07-23 05:03:36.218679] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.269 [2024-07-23 05:03:36.220589] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.269 [2024-07-23 05:03:36.222552] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:12:36.269 [2024-07-23 05:03:36.224877] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.269 [2024-07-23 05:03:36.229602] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.269 [2024-07-23 05:03:36.233761] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.269 [2024-07-23 05:03:36.237738] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.269 [2024-07-23 05:03:36.241020] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.269 [2024-07-23 05:03:36.246582] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.269 [2024-07-23 05:03:36.248571] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.269 [2024-07-23 05:03:36.251901] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.269 [2024-07-23 05:03:36.255998] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.269 [2024-07-23 05:03:36.257834] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.269 [2024-07-23 05:03:36.260348] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.269 [2024-07-23 05:03:36.266891] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.269 [2024-07-23 05:03:36.296242] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.269 [2024-07-23 05:03:36.302261] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.269 [2024-07-23 05:03:36.307116] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.269 [2024-07-23 05:03:36.310595] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:36.835 Logging out 
of session [sid: 10, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:12:36.835 Logging out of session [sid: 11, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:12:36.835 Logging out of session [sid: 12, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:12:36.835 Logging out of session [sid: 13, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:12:36.835 Logging out of session [sid: 14, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:12:36.835 Logging out of session [sid: 15, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:12:36.835 Logging out of session [sid: 16, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:12:36.835 Logging out of session [sid: 17, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:12:36.835 Logging out of session [sid: 18, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:12:36.835 Logging out of session [sid: 9, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:12:36.835 Logout of [sid: 10, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:12:36.835 Logout of [sid: 11, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:12:36.835 Logout of [sid: 12, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:12:36.835 Logout of [sid: 13, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 00:12:36.835 Logout of [sid: 14, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 00:12:36.835 Logout of [sid: 15, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:12:36.835 Logout of [sid: 16, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 00:12:36.835 Logout of [sid: 17, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:12:36.835 Logout of [sid: 18, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 
00:12:36.835 Logout of [sid: 9, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:12:36.835 05:03:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:12:36.835 05:03:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@983 -- # rm -rf 00:12:36.835 05:03:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@85 -- # killprocess 78785 00:12:36.835 05:03:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@948 -- # '[' -z 78785 ']' 00:12:36.835 05:03:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@952 -- # kill -0 78785 00:12:36.835 05:03:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@953 -- # uname 00:12:36.835 05:03:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:36.835 05:03:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78785 00:12:36.835 killing process with pid 78785 00:12:36.835 05:03:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:36.835 05:03:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:36.835 05:03:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78785' 00:12:36.835 05:03:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@967 -- # kill 78785 00:12:36.835 05:03:36 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@972 -- # wait 78785 00:12:37.769 05:03:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@86 -- # iscsitestfini 00:12:37.769 05:03:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:12:37.769 00:12:37.769 real 1m1.531s 00:12:37.769 user 4m15.925s 00:12:37.769 sys 0m26.996s 00:12:37.769 05:03:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:37.769 
************************************ 00:12:37.769 05:03:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:37.769 END TEST iscsi_tgt_iscsi_lvol 00:12:37.769 ************************************ 00:12:37.769 05:03:37 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:12:37.769 05:03:37 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@37 -- # run_test iscsi_tgt_fio /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/fio/fio.sh 00:12:37.769 05:03:37 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:37.769 05:03:37 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:37.769 05:03:37 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:12:37.769 ************************************ 00:12:37.769 START TEST iscsi_tgt_fio 00:12:37.769 ************************************ 00:12:37.769 05:03:37 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/fio/fio.sh 00:12:37.769 * Looking for test storage... 
00:12:37.769 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/fio 00:12:37.769 05:03:37 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:12:37.769 05:03:37 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:12:37.769 05:03:37 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:12:37.769 05:03:37 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:12:37.769 05:03:37 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:12:37.769 05:03:37 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:12:37.769 05:03:37 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:12:37.769 05:03:37 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:12:37.769 05:03:37 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:12:37.769 05:03:37 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:12:37.769 05:03:37 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:12:37.769 05:03:37 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:12:37.769 05:03:37 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:12:37.769 05:03:37 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:12:37.769 05:03:37 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:12:37.769 05:03:37 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:12:37.769 05:03:37 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:12:37.769 05:03:37 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:12:37.769 05:03:37 iscsi_tgt.iscsi_tgt_fio -- 
iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:12:37.769 05:03:37 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:12:37.769 05:03:37 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@11 -- # iscsitestinit 00:12:37.770 05:03:37 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:12:37.770 05:03:37 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@48 -- # '[' -z 10.0.0.1 ']' 00:12:37.770 05:03:37 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@53 -- # '[' -z 10.0.0.2 ']' 00:12:37.770 05:03:37 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@58 -- # MALLOC_BDEV_SIZE=64 00:12:37.770 05:03:37 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@59 -- # MALLOC_BLOCK_SIZE=4096 00:12:38.028 05:03:37 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@60 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:38.028 05:03:37 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@61 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:12:38.028 05:03:37 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@63 -- # timing_enter start_iscsi_tgt 00:12:38.028 05:03:37 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:38.028 05:03:37 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@10 -- # set +x 00:12:38.028 05:03:37 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@66 -- # pid=83498 00:12:38.028 05:03:37 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@65 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:12:38.028 Process pid: 83498 00:12:38.028 05:03:37 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@67 -- # echo 'Process pid: 83498' 00:12:38.028 05:03:37 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@69 -- # trap 'killprocess $pid; exit 1' SIGINT SIGTERM EXIT 00:12:38.028 05:03:37 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@71 -- # waitforlisten 83498 00:12:38.028 05:03:37 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@829 -- # '[' -z 83498 ']' 00:12:38.028 05:03:37 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@833 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:12:38.028 05:03:37 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:38.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.028 05:03:37 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.028 05:03:37 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:38.028 05:03:37 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@10 -- # set +x 00:12:38.028 [2024-07-23 05:03:38.059882] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:12:38.028 [2024-07-23 05:03:38.059987] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83498 ] 00:12:38.028 [2024-07-23 05:03:38.196297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.286 [2024-07-23 05:03:38.297218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.852 05:03:38 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:38.852 05:03:38 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@862 -- # return 0 00:12:38.852 05:03:38 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:12:39.419 iscsi_tgt is listening. Running tests... 00:12:39.419 05:03:39 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@75 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:12:39.419 05:03:39 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@77 -- # timing_exit start_iscsi_tgt 00:12:39.419 05:03:39 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:39.419 05:03:39 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@10 -- # set +x 00:12:39.419 05:03:39 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:12:39.678 05:03:39 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:12:39.936 05:03:40 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 4096 00:12:40.194 05:03:40 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@82 -- # malloc_bdevs='Malloc0 ' 00:12:40.194 05:03:40 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 4096 00:12:40.451 05:03:40 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@83 -- # malloc_bdevs+=Malloc1 00:12:40.451 05:03:40 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:12:40.708 05:03:40 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 1024 512 00:12:41.274 05:03:41 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@85 -- # bdev=Malloc2 00:12:41.274 05:03:41 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias 'raid0:0 Malloc2:1' 1:2 64 -d 00:12:41.532 05:03:41 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@91 -- # sleep 1 00:12:42.465 05:03:42 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@93 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:12:42.465 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:12:42.465 05:03:42 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@94 -- # iscsiadm -m node --login -p 
10.0.0.1:3260 00:12:42.723 [2024-07-23 05:03:42.684893] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:42.723 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:12:42.723 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:12:42.723 05:03:42 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@95 -- # waitforiscsidevices 2 00:12:42.723 05:03:42 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@116 -- # local num=2 00:12:42.723 05:03:42 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:12:42.723 05:03:42 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:12:42.723 05:03:42 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:12:42.723 05:03:42 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:12:42.723 [2024-07-23 05:03:42.702328] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:42.723 05:03:42 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@119 -- # n=2 00:12:42.723 05:03:42 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@120 -- # '[' 2 -ne 2 ']' 00:12:42.723 05:03:42 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@123 -- # return 0 00:12:42.723 05:03:42 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@97 -- # trap 'iscsicleanup; killprocess $pid; iscsitestfini; delete_tmp_files; exit 1' SIGINT SIGTERM EXIT 00:12:42.723 05:03:42 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 4096 -d 1 -t randrw -r 1 -v 00:12:42.723 [global] 00:12:42.723 thread=1 00:12:42.723 invalidate=1 00:12:42.723 rw=randrw 00:12:42.723 time_based=1 00:12:42.723 runtime=1 00:12:42.723 ioengine=libaio 00:12:42.723 direct=1 00:12:42.723 bs=4096 00:12:42.723 iodepth=1 00:12:42.723 norandommap=0 00:12:42.723 numjobs=1 00:12:42.723 00:12:42.723 verify_dump=1 00:12:42.723 verify_backlog=512 
00:12:42.723 verify_state_save=0 00:12:42.723 do_verify=1 00:12:42.723 verify=crc32c-intel 00:12:42.723 [job0] 00:12:42.723 filename=/dev/sda 00:12:42.723 [job1] 00:12:42.723 filename=/dev/sdb 00:12:42.723 queue_depth set to 113 (sda) 00:12:42.723 queue_depth set to 113 (sdb) 00:12:42.723 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:42.723 job1: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:42.723 fio-3.35 00:12:42.723 Starting 2 threads 00:12:42.723 [2024-07-23 05:03:42.935659] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:42.723 [2024-07-23 05:03:42.938509] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:44.129 [2024-07-23 05:03:44.048862] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:44.129 [2024-07-23 05:03:44.050886] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:44.129 00:12:44.129 job0: (groupid=0, jobs=1): err= 0: pid=83643: Tue Jul 23 05:03:44 2024 00:12:44.129 read: IOPS=4799, BW=18.7MiB/s (19.7MB/s)(18.8MiB/1001msec) 00:12:44.129 slat (nsec): min=3181, max=57546, avg=6919.29, stdev=3025.86 00:12:44.129 clat (usec): min=69, max=421, avg=126.71, stdev=25.33 00:12:44.129 lat (usec): min=75, max=428, avg=133.63, stdev=25.94 00:12:44.129 clat percentiles (usec): 00:12:44.129 | 1.00th=[ 82], 5.00th=[ 94], 10.00th=[ 97], 20.00th=[ 102], 00:12:44.129 | 30.00th=[ 111], 40.00th=[ 119], 50.00th=[ 127], 60.00th=[ 133], 00:12:44.129 | 70.00th=[ 139], 80.00th=[ 147], 90.00th=[ 159], 95.00th=[ 169], 00:12:44.129 | 99.00th=[ 194], 99.50th=[ 202], 99.90th=[ 241], 99.95th=[ 351], 00:12:44.129 | 99.99th=[ 420] 00:12:44.129 bw ( KiB/s): min= 9944, max= 9944, per=26.71%, avg=9944.00, stdev= 0.00, samples=1 00:12:44.129 iops : min= 2486, max= 2486, avg=2486.00, stdev= 0.00, samples=1 00:12:44.129 
write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:44.129 slat (nsec): min=4047, max=56434, avg=8276.22, stdev=3677.62 00:12:44.129 clat (usec): min=70, max=703, avg=128.18, stdev=32.70 00:12:44.129 lat (usec): min=77, max=709, avg=136.46, stdev=33.31 00:12:44.129 clat percentiles (usec): 00:12:44.129 | 1.00th=[ 76], 5.00th=[ 80], 10.00th=[ 85], 20.00th=[ 98], 00:12:44.129 | 30.00th=[ 113], 40.00th=[ 122], 50.00th=[ 129], 60.00th=[ 135], 00:12:44.129 | 70.00th=[ 143], 80.00th=[ 151], 90.00th=[ 167], 95.00th=[ 182], 00:12:44.129 | 99.00th=[ 208], 99.50th=[ 229], 99.90th=[ 255], 99.95th=[ 330], 00:12:44.129 | 99.99th=[ 701] 00:12:44.129 bw ( KiB/s): min=10264, max=10264, per=50.17%, avg=10264.00, stdev= 0.00, samples=1 00:12:44.129 iops : min= 2566, max= 2566, avg=2566.00, stdev= 0.00, samples=1 00:12:44.129 lat (usec) : 100=18.54%, 250=81.36%, 500=0.10%, 750=0.01% 00:12:44.129 cpu : usr=3.20%, sys=6.00%, ctx=7368, majf=0, minf=13 00:12:44.129 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:44.129 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:44.129 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:44.129 issued rwts: total=4804,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:44.129 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:44.129 job1: (groupid=0, jobs=1): err= 0: pid=83644: Tue Jul 23 05:03:44 2024 00:12:44.129 read: IOPS=4513, BW=17.6MiB/s (18.5MB/s)(17.6MiB/1000msec) 00:12:44.129 slat (nsec): min=3116, max=56567, avg=5978.70, stdev=3222.63 00:12:44.129 clat (usec): min=63, max=743, avg=127.74, stdev=27.64 00:12:44.129 lat (usec): min=71, max=751, avg=133.71, stdev=28.54 00:12:44.129 clat percentiles (usec): 00:12:44.129 | 1.00th=[ 82], 5.00th=[ 95], 10.00th=[ 98], 20.00th=[ 104], 00:12:44.129 | 30.00th=[ 113], 40.00th=[ 120], 50.00th=[ 127], 60.00th=[ 133], 00:12:44.129 | 70.00th=[ 139], 80.00th=[ 147], 90.00th=[ 159], 
95.00th=[ 172], 00:12:44.129 | 99.00th=[ 206], 99.50th=[ 233], 99.90th=[ 289], 99.95th=[ 351], 00:12:44.129 | 99.99th=[ 742] 00:12:44.129 bw ( KiB/s): min= 9280, max= 9280, per=24.93%, avg=9280.00, stdev= 0.00, samples=1 00:12:44.129 iops : min= 2320, max= 2320, avg=2320.00, stdev= 0.00, samples=1 00:12:44.129 write: IOPS=2560, BW=10.0MiB/s (10.5MB/s)(10.0MiB/1000msec); 0 zone resets 00:12:44.129 slat (nsec): min=4089, max=50805, avg=7980.59, stdev=3953.77 00:12:44.129 clat (usec): min=67, max=308, avg=143.51, stdev=32.45 00:12:44.129 lat (usec): min=74, max=325, avg=151.49, stdev=32.96 00:12:44.129 clat percentiles (usec): 00:12:44.129 | 1.00th=[ 77], 5.00th=[ 99], 10.00th=[ 105], 20.00th=[ 119], 00:12:44.129 | 30.00th=[ 127], 40.00th=[ 135], 50.00th=[ 141], 60.00th=[ 147], 00:12:44.129 | 70.00th=[ 155], 80.00th=[ 167], 90.00th=[ 188], 95.00th=[ 204], 00:12:44.129 | 99.00th=[ 239], 99.50th=[ 247], 99.90th=[ 265], 99.95th=[ 289], 00:12:44.129 | 99.99th=[ 310] 00:12:44.129 bw ( KiB/s): min= 9992, max= 9992, per=48.84%, avg=9992.00, stdev= 0.00, samples=1 00:12:44.129 iops : min= 2498, max= 2498, avg=2498.00, stdev= 0.00, samples=1 00:12:44.129 lat (usec) : 100=11.59%, 250=88.10%, 500=0.30%, 750=0.01% 00:12:44.129 cpu : usr=2.50%, sys=5.80%, ctx=7073, majf=0, minf=7 00:12:44.129 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:44.129 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:44.129 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:44.129 issued rwts: total=4513,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:44.129 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:44.129 00:12:44.129 Run status group 0 (all jobs): 00:12:44.129 READ: bw=36.4MiB/s (38.1MB/s), 17.6MiB/s-18.7MiB/s (18.5MB/s-19.7MB/s), io=36.4MiB (38.2MB), run=1000-1001msec 00:12:44.129 WRITE: bw=20.0MiB/s (20.9MB/s), 9.99MiB/s-10.0MiB/s (10.5MB/s-10.5MB/s), io=20.0MiB (21.0MB), run=1000-1001msec 
00:12:44.129 00:12:44.129 Disk stats (read/write): 00:12:44.129 sda: ios=4301/2309, merge=0/0, ticks=545/295, in_queue=840, util=90.59% 00:12:44.129 sdb: ios=4163/2189, merge=0/0, ticks=531/310, in_queue=842, util=90.96% 00:12:44.129 05:03:44 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 32 -t randrw -r 1 -v 00:12:44.129 [global] 00:12:44.129 thread=1 00:12:44.129 invalidate=1 00:12:44.129 rw=randrw 00:12:44.129 time_based=1 00:12:44.129 runtime=1 00:12:44.129 ioengine=libaio 00:12:44.129 direct=1 00:12:44.129 bs=131072 00:12:44.129 iodepth=32 00:12:44.129 norandommap=0 00:12:44.129 numjobs=1 00:12:44.129 00:12:44.129 verify_dump=1 00:12:44.129 verify_backlog=512 00:12:44.129 verify_state_save=0 00:12:44.129 do_verify=1 00:12:44.129 verify=crc32c-intel 00:12:44.129 [job0] 00:12:44.129 filename=/dev/sda 00:12:44.129 [job1] 00:12:44.129 filename=/dev/sdb 00:12:44.129 queue_depth set to 113 (sda) 00:12:44.129 queue_depth set to 113 (sdb) 00:12:44.129 job0: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:12:44.129 job1: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:12:44.129 fio-3.35 00:12:44.129 Starting 2 threads 00:12:44.129 [2024-07-23 05:03:44.262391] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:44.129 [2024-07-23 05:03:44.265256] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:45.531 [2024-07-23 05:03:45.392833] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:45.531 [2024-07-23 05:03:45.396246] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:45.531 00:12:45.531 job0: (groupid=0, jobs=1): err= 0: pid=83709: Tue Jul 23 05:03:45 2024 00:12:45.531 read: IOPS=981, BW=123MiB/s (129MB/s)(124MiB/1012msec) 00:12:45.531 slat 
(usec): min=5, max=100, avg=21.99, stdev=11.56 00:12:45.531 clat (usec): min=1241, max=35915, avg=10995.25, stdev=7096.70 00:12:45.531 lat (usec): min=1262, max=35924, avg=11017.24, stdev=7095.92 00:12:45.531 clat percentiles (usec): 00:12:45.531 | 1.00th=[ 1401], 5.00th=[ 1598], 10.00th=[ 1795], 20.00th=[ 3621], 00:12:45.531 | 30.00th=[ 5866], 40.00th=[ 9372], 50.00th=[11600], 60.00th=[13042], 00:12:45.531 | 70.00th=[14091], 80.00th=[15795], 90.00th=[19530], 95.00th=[23200], 00:12:45.531 | 99.00th=[33817], 99.50th=[34341], 99.90th=[35914], 99.95th=[35914], 00:12:45.531 | 99.99th=[35914] 00:12:45.531 bw ( KiB/s): min=122624, max=129024, per=50.50%, avg=125824.00, stdev=4525.48, samples=2 00:12:45.531 iops : min= 958, max= 1008, avg=983.00, stdev=35.36, samples=2 00:12:45.531 write: IOPS=1040, BW=130MiB/s (136MB/s)(132MiB/1012msec); 0 zone resets 00:12:45.531 slat (usec): min=29, max=217, avg=78.80, stdev=23.31 00:12:45.531 clat (usec): min=3324, max=40049, avg=20220.71, stdev=5081.13 00:12:45.531 lat (usec): min=3397, max=40115, avg=20299.51, stdev=5082.46 00:12:45.531 clat percentiles (usec): 00:12:45.531 | 1.00th=[12649], 5.00th=[13829], 10.00th=[14484], 20.00th=[15533], 00:12:45.531 | 30.00th=[16712], 40.00th=[18482], 50.00th=[20055], 60.00th=[21103], 00:12:45.531 | 70.00th=[22414], 80.00th=[23725], 90.00th=[26346], 95.00th=[30540], 00:12:45.531 | 99.00th=[34866], 99.50th=[35914], 99.90th=[36963], 99.95th=[40109], 00:12:45.531 | 99.99th=[40109] 00:12:45.531 bw ( KiB/s): min=127488, max=136704, per=50.28%, avg=132096.00, stdev=6516.70, samples=2 00:12:45.531 iops : min= 996, max= 1068, avg=1032.00, stdev=50.91, samples=2 00:12:45.531 lat (msec) : 2=5.87%, 4=4.79%, 10=9.82%, 20=49.46%, 50=30.06% 00:12:45.531 cpu : usr=8.21%, sys=4.25%, ctx=1657, majf=0, minf=19 00:12:45.531 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0% 00:12:45.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:45.531 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:12:45.531 issued rwts: total=993,1053,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:45.531 latency : target=0, window=0, percentile=100.00%, depth=32 00:12:45.531 job1: (groupid=0, jobs=1): err= 0: pid=83710: Tue Jul 23 05:03:45 2024 00:12:45.531 read: IOPS=969, BW=121MiB/s (127MB/s)(122MiB/1008msec) 00:12:45.531 slat (nsec): min=5317, max=77442, avg=19034.21, stdev=9257.54 00:12:45.531 clat (usec): min=1206, max=38487, avg=11417.55, stdev=7274.81 00:12:45.531 lat (usec): min=1228, max=38499, avg=11436.58, stdev=7274.35 00:12:45.531 clat percentiles (usec): 00:12:45.531 | 1.00th=[ 1385], 5.00th=[ 1598], 10.00th=[ 1958], 20.00th=[ 4047], 00:12:45.531 | 30.00th=[ 6652], 40.00th=[ 9765], 50.00th=[11469], 60.00th=[13173], 00:12:45.531 | 70.00th=[14353], 80.00th=[15926], 90.00th=[20055], 95.00th=[25297], 00:12:45.531 | 99.00th=[33817], 99.50th=[34866], 99.90th=[38536], 99.95th=[38536], 00:12:45.531 | 99.99th=[38536] 00:12:45.531 bw ( KiB/s): min=114944, max=125440, per=48.24%, avg=120192.00, stdev=7421.79, samples=2 00:12:45.531 iops : min= 898, max= 980, avg=939.00, stdev=57.98, samples=2 00:12:45.531 write: IOPS=1015, BW=127MiB/s (133MB/s)(128MiB/1008msec); 0 zone resets 00:12:45.531 slat (usec): min=29, max=205, avg=70.47, stdev=22.06 00:12:45.531 clat (usec): min=2813, max=38691, avg=20436.14, stdev=5202.23 00:12:45.531 lat (usec): min=2905, max=38750, avg=20506.61, stdev=5203.77 00:12:45.531 clat percentiles (usec): 00:12:45.531 | 1.00th=[11600], 5.00th=[13829], 10.00th=[14615], 20.00th=[15795], 00:12:45.531 | 30.00th=[16909], 40.00th=[18482], 50.00th=[20317], 60.00th=[21627], 00:12:45.531 | 70.00th=[22676], 80.00th=[23987], 90.00th=[26608], 95.00th=[30016], 00:12:45.531 | 99.00th=[36439], 99.50th=[37487], 99.90th=[38536], 99.95th=[38536], 00:12:45.531 | 99.99th=[38536] 00:12:45.531 bw ( KiB/s): min=123648, max=138496, per=49.89%, avg=131072.00, stdev=10499.12, samples=2 00:12:45.531 iops : min= 966, max= 1082, 
avg=1024.00, stdev=82.02, samples=2 00:12:45.531 lat (msec) : 2=5.30%, 4=4.40%, 10=10.44%, 20=48.18%, 50=31.68% 00:12:45.531 cpu : usr=6.45%, sys=4.57%, ctx=1586, majf=0, minf=17 00:12:45.531 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0% 00:12:45.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:45.531 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:12:45.531 issued rwts: total=977,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:45.531 latency : target=0, window=0, percentile=100.00%, depth=32 00:12:45.531 00:12:45.531 Run status group 0 (all jobs): 00:12:45.531 READ: bw=243MiB/s (255MB/s), 121MiB/s-123MiB/s (127MB/s-129MB/s), io=246MiB (258MB), run=1008-1012msec 00:12:45.531 WRITE: bw=257MiB/s (269MB/s), 127MiB/s-130MiB/s (133MB/s-136MB/s), io=260MiB (272MB), run=1008-1012msec 00:12:45.531 00:12:45.531 Disk stats (read/write): 00:12:45.531 sda: ios=942/927, merge=0/0, ticks=8984/18342, in_queue=27326, util=90.08% 00:12:45.531 sdb: ios=900/927, merge=0/0, ticks=8670/18423, in_queue=27094, util=90.14% 00:12:45.531 05:03:45 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@101 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 524288 -d 128 -t randrw -r 1 -v 00:12:45.531 [global] 00:12:45.531 thread=1 00:12:45.531 invalidate=1 00:12:45.531 rw=randrw 00:12:45.531 time_based=1 00:12:45.531 runtime=1 00:12:45.531 ioengine=libaio 00:12:45.531 direct=1 00:12:45.531 bs=524288 00:12:45.531 iodepth=128 00:12:45.531 norandommap=0 00:12:45.531 numjobs=1 00:12:45.531 00:12:45.531 verify_dump=1 00:12:45.531 verify_backlog=512 00:12:45.531 verify_state_save=0 00:12:45.531 do_verify=1 00:12:45.531 verify=crc32c-intel 00:12:45.531 [job0] 00:12:45.531 filename=/dev/sda 00:12:45.531 [job1] 00:12:45.531 filename=/dev/sdb 00:12:45.531 queue_depth set to 113 (sda) 00:12:45.531 queue_depth set to 113 (sdb) 00:12:45.531 job0: (g=0): rw=randrw, bs=(R) 512KiB-512KiB, (W) 512KiB-512KiB, (T) 512KiB-512KiB, 
ioengine=libaio, iodepth=128 00:12:45.531 job1: (g=0): rw=randrw, bs=(R) 512KiB-512KiB, (W) 512KiB-512KiB, (T) 512KiB-512KiB, ioengine=libaio, iodepth=128 00:12:45.531 fio-3.35 00:12:45.531 Starting 2 threads 00:12:45.531 [2024-07-23 05:03:45.607789] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:45.531 [2024-07-23 05:03:45.611519] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:46.907 [2024-07-23 05:03:46.868074] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:46.907 [2024-07-23 05:03:46.871650] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:46.907 00:12:46.907 job0: (groupid=0, jobs=1): err= 0: pid=83778: Tue Jul 23 05:03:46 2024 00:12:46.907 read: IOPS=228, BW=114MiB/s (120MB/s)(121MiB/1058msec) 00:12:46.907 slat (usec): min=19, max=18190, avg=1661.19, stdev=3050.31 00:12:46.907 clat (msec): min=94, max=342, avg=234.06, stdev=51.60 00:12:46.907 lat (msec): min=94, max=342, avg=235.72, stdev=51.77 00:12:46.907 clat percentiles (msec): 00:12:46.907 | 1.00th=[ 99], 5.00th=[ 133], 10.00th=[ 180], 20.00th=[ 201], 00:12:46.907 | 30.00th=[ 209], 40.00th=[ 218], 50.00th=[ 228], 60.00th=[ 247], 00:12:46.907 | 70.00th=[ 266], 80.00th=[ 279], 90.00th=[ 305], 95.00th=[ 313], 00:12:46.907 | 99.00th=[ 330], 99.50th=[ 334], 99.90th=[ 342], 99.95th=[ 342], 00:12:46.907 | 99.99th=[ 342] 00:12:46.907 bw ( KiB/s): min=57344, max=140288, per=38.18%, avg=98816.00, stdev=58650.26, samples=2 00:12:46.907 iops : min= 112, max= 274, avg=193.00, stdev=114.55, samples=2 00:12:46.907 write: IOPS=255, BW=128MiB/s (134MB/s)(135MiB/1058msec); 0 zone resets 00:12:46.907 slat (usec): min=150, max=21480, avg=2090.37, stdev=3653.81 00:12:46.907 clat (msec): min=87, max=379, avg=252.75, stdev=58.27 00:12:46.907 lat (msec): min=94, max=380, avg=254.84, stdev=58.46 00:12:46.907 clat percentiles (msec): 00:12:46.907 | 1.00th=[ 100], 5.00th=[ 124], 
10.00th=[ 165], 20.00th=[ 222], 00:12:46.907 | 30.00th=[ 232], 40.00th=[ 243], 50.00th=[ 251], 60.00th=[ 271], 00:12:46.907 | 70.00th=[ 279], 80.00th=[ 305], 90.00th=[ 326], 95.00th=[ 338], 00:12:46.907 | 99.00th=[ 363], 99.50th=[ 372], 99.90th=[ 380], 99.95th=[ 380], 00:12:46.907 | 99.99th=[ 380] 00:12:46.907 bw ( KiB/s): min=58368, max=138240, per=33.81%, avg=98304.00, stdev=56478.03, samples=2 00:12:46.907 iops : min= 114, max= 270, avg=192.00, stdev=110.31, samples=2 00:12:46.907 lat (msec) : 100=1.76%, 250=53.71%, 500=44.53% 00:12:46.907 cpu : usr=6.34%, sys=1.70%, ctx=287, majf=0, minf=9 00:12:46.907 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.1%, 32=6.2%, >=64=87.7% 00:12:46.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:46.907 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:12:46.907 issued rwts: total=242,270,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:46.907 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:46.907 job1: (groupid=0, jobs=1): err= 0: pid=83780: Tue Jul 23 05:03:46 2024 00:12:46.907 read: IOPS=283, BW=142MiB/s (149MB/s)(155MiB/1090msec) 00:12:46.907 slat (usec): min=19, max=13296, avg=1472.62, stdev=2785.36 00:12:46.907 clat (msec): min=91, max=362, avg=196.23, stdev=52.62 00:12:46.907 lat (msec): min=91, max=371, avg=197.70, stdev=53.10 00:12:46.907 clat percentiles (msec): 00:12:46.907 | 1.00th=[ 95], 5.00th=[ 112], 10.00th=[ 153], 20.00th=[ 163], 00:12:46.907 | 30.00th=[ 174], 40.00th=[ 180], 50.00th=[ 186], 60.00th=[ 190], 00:12:46.907 | 70.00th=[ 199], 80.00th=[ 236], 90.00th=[ 271], 95.00th=[ 300], 00:12:46.907 | 99.00th=[ 363], 99.50th=[ 363], 99.90th=[ 363], 99.95th=[ 363], 00:12:46.907 | 99.99th=[ 363] 00:12:46.907 bw ( KiB/s): min=122880, max=136192, per=50.05%, avg=129536.00, stdev=9413.01, samples=2 00:12:46.907 iops : min= 240, max= 266, avg=253.00, stdev=18.38, samples=2 00:12:46.907 write: IOPS=320, BW=160MiB/s (168MB/s)(175MiB/1090msec); 0 zone 
resets 00:12:46.907 slat (usec): min=148, max=22331, avg=1563.13, stdev=3045.83 00:12:46.907 clat (msec): min=85, max=383, avg=212.29, stdev=51.31 00:12:46.907 lat (msec): min=90, max=383, avg=213.85, stdev=51.38 00:12:46.907 clat percentiles (msec): 00:12:46.907 | 1.00th=[ 92], 5.00th=[ 116], 10.00th=[ 150], 20.00th=[ 184], 00:12:46.907 | 30.00th=[ 197], 40.00th=[ 203], 50.00th=[ 209], 60.00th=[ 220], 00:12:46.907 | 70.00th=[ 228], 80.00th=[ 243], 90.00th=[ 275], 95.00th=[ 305], 00:12:46.907 | 99.00th=[ 372], 99.50th=[ 380], 99.90th=[ 384], 99.95th=[ 384], 00:12:46.907 | 99.99th=[ 384] 00:12:46.907 bw ( KiB/s): min=135168, max=148480, per=48.78%, avg=141824.00, stdev=9413.01, samples=2 00:12:46.907 iops : min= 264, max= 290, avg=277.00, stdev=18.38, samples=2 00:12:46.907 lat (msec) : 100=1.82%, 250=81.46%, 500=16.72% 00:12:46.907 cpu : usr=7.62%, sys=2.39%, ctx=535, majf=0, minf=5 00:12:46.907 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.9%, >=64=90.4% 00:12:46.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:46.907 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:12:46.907 issued rwts: total=309,349,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:46.907 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:46.907 00:12:46.907 Run status group 0 (all jobs): 00:12:46.907 READ: bw=253MiB/s (265MB/s), 114MiB/s-142MiB/s (120MB/s-149MB/s), io=276MiB (289MB), run=1058-1090msec 00:12:46.907 WRITE: bw=284MiB/s (298MB/s), 128MiB/s-160MiB/s (134MB/s-168MB/s), io=310MiB (325MB), run=1058-1090msec 00:12:46.907 00:12:46.907 Disk stats (read/write): 00:12:46.907 sda: ios=233/186, merge=0/0, ticks=18810/24054, in_queue=42865, util=75.94% 00:12:46.907 sdb: ios=357/339, merge=0/0, ticks=22794/33099, in_queue=55893, util=80.84% 00:12:46.907 05:03:46 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1048576 -d 1024 -t read -r 1 -n 4 00:12:46.907 
[global] 00:12:46.907 thread=1 00:12:46.907 invalidate=1 00:12:46.907 rw=read 00:12:46.907 time_based=1 00:12:46.907 runtime=1 00:12:46.907 ioengine=libaio 00:12:46.907 direct=1 00:12:46.907 bs=1048576 00:12:46.907 iodepth=1024 00:12:46.907 norandommap=1 00:12:46.907 numjobs=4 00:12:46.907 00:12:46.907 [job0] 00:12:46.907 filename=/dev/sda 00:12:46.907 [job1] 00:12:46.907 filename=/dev/sdb 00:12:46.907 queue_depth set to 113 (sda) 00:12:46.907 queue_depth set to 113 (sdb) 00:12:46.907 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=1024 00:12:46.907 ... 00:12:46.907 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=1024 00:12:46.907 ... 00:12:46.907 fio-3.35 00:12:46.907 Starting 8 threads 00:13:01.779 00:13:01.779 job0: (groupid=0, jobs=1): err= 0: pid=83841: Tue Jul 23 05:04:01 2024 00:13:01.779 read: IOPS=1, BW=1770KiB/s (1812kB/s)(25.0MiB/14464msec) 00:13:01.779 slat (usec): min=486, max=3287.8k, avg=132647.70, stdev=657328.34 00:13:01.779 clat (msec): min=11147, max=14462, avg=14318.20, stdev=660.68 00:13:01.779 lat (msec): min=14435, max=14463, avg=14450.85, stdev= 9.13 00:13:01.779 clat percentiles (msec): 00:13:01.779 | 1.00th=[11208], 5.00th=[14429], 10.00th=[14429], 20.00th=[14429], 00:13:01.779 | 30.00th=[14429], 40.00th=[14429], 50.00th=[14429], 60.00th=[14429], 00:13:01.779 | 70.00th=[14429], 80.00th=[14429], 90.00th=[14429], 95.00th=[14429], 00:13:01.779 | 99.00th=[14429], 99.50th=[14429], 99.90th=[14429], 99.95th=[14429], 00:13:01.779 | 99.99th=[14429] 00:13:01.779 lat (msec) : >=2000=100.00% 00:13:01.779 cpu : usr=0.01%, sys=0.09%, ctx=28, majf=0, minf=6401 00:13:01.779 IO depths : 1=4.0%, 2=8.0%, 4=16.0%, 8=32.0%, 16=40.0%, 32=0.0%, >=64=0.0% 00:13:01.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:01.779 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 
00:13:01.779 issued rwts: total=25,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:01.779 latency : target=0, window=0, percentile=100.00%, depth=1024 00:13:01.779 job0: (groupid=0, jobs=1): err= 0: pid=83842: Tue Jul 23 05:04:01 2024 00:13:01.779 read: IOPS=0, BW=284KiB/s (290kB/s)(4096KiB/14444msec) 00:13:01.779 slat (usec): min=776, max=3287.8k, avg=822756.35, stdev=1643337.93 00:13:01.779 clat (msec): min=11152, max=14441, avg=13618.80, stdev=1644.28 00:13:01.779 lat (msec): min=14440, max=14443, avg=14441.56, stdev= 1.40 00:13:01.779 clat percentiles (msec): 00:13:01.779 | 1.00th=[11208], 5.00th=[11208], 10.00th=[11208], 20.00th=[11208], 00:13:01.779 | 30.00th=[14429], 40.00th=[14429], 50.00th=[14429], 60.00th=[14429], 00:13:01.779 | 70.00th=[14429], 80.00th=[14429], 90.00th=[14429], 95.00th=[14429], 00:13:01.779 | 99.00th=[14429], 99.50th=[14429], 99.90th=[14429], 99.95th=[14429], 00:13:01.779 | 99.99th=[14429] 00:13:01.779 lat (msec) : >=2000=100.00% 00:13:01.779 cpu : usr=0.00%, sys=0.01%, ctx=8, majf=0, minf=1025 00:13:01.779 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:01.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:01.779 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:01.779 issued rwts: total=4,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:01.779 latency : target=0, window=0, percentile=100.00%, depth=1024 00:13:01.779 job0: (groupid=0, jobs=1): err= 0: pid=83843: Tue Jul 23 05:04:01 2024 00:13:01.779 read: IOPS=0, BW=779KiB/s (798kB/s)(11.0MiB/14456msec) 00:13:01.779 slat (usec): min=621, max=3287.3k, avg=299946.57, stdev=990792.20 00:13:01.779 clat (msec): min=11155, max=14454, avg=14149.36, stdev=992.83 00:13:01.779 lat (msec): min=14443, max=14455, avg=14449.30, stdev= 4.65 00:13:01.779 clat percentiles (msec): 00:13:01.779 | 1.00th=[11208], 5.00th=[11208], 10.00th=[14429], 20.00th=[14429], 00:13:01.779 | 30.00th=[14429], 40.00th=[14429], 50.00th=[14429], 
60.00th=[14429], 00:13:01.779 | 70.00th=[14429], 80.00th=[14429], 90.00th=[14429], 95.00th=[14429], 00:13:01.779 | 99.00th=[14429], 99.50th=[14429], 99.90th=[14429], 99.95th=[14429], 00:13:01.779 | 99.99th=[14429] 00:13:01.779 lat (msec) : >=2000=100.00% 00:13:01.779 cpu : usr=0.00%, sys=0.06%, ctx=16, majf=0, minf=2817 00:13:01.779 IO depths : 1=9.1%, 2=18.2%, 4=36.4%, 8=36.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:01.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:01.779 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:01.779 issued rwts: total=11,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:01.779 latency : target=0, window=0, percentile=100.00%, depth=1024 00:13:01.779 job0: (groupid=0, jobs=1): err= 0: pid=83844: Tue Jul 23 05:04:01 2024 00:13:01.779 read: IOPS=2, BW=2687KiB/s (2751kB/s)(38.0MiB/14484msec) 00:13:01.779 slat (usec): min=454, max=3285.2k, avg=87351.44, stdev=532772.80 00:13:01.779 clat (msec): min=11164, max=14482, avg=14379.18, stdev=535.74 00:13:01.779 lat (msec): min=14449, max=14483, avg=14466.53, stdev=10.60 00:13:01.779 clat percentiles (msec): 00:13:01.779 | 1.00th=[11208], 5.00th=[14429], 10.00th=[14429], 20.00th=[14429], 00:13:01.779 | 30.00th=[14429], 40.00th=[14429], 50.00th=[14429], 60.00th=[14429], 00:13:01.779 | 70.00th=[14429], 80.00th=[14429], 90.00th=[14429], 95.00th=[14429], 00:13:01.779 | 99.00th=[14429], 99.50th=[14429], 99.90th=[14429], 99.95th=[14429], 00:13:01.779 | 99.99th=[14429] 00:13:01.779 lat (msec) : >=2000=100.00% 00:13:01.779 cpu : usr=0.01%, sys=0.14%, ctx=61, majf=0, minf=9729 00:13:01.779 IO depths : 1=2.6%, 2=5.3%, 4=10.5%, 8=21.1%, 16=42.1%, 32=18.4%, >=64=0.0% 00:13:01.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:01.779 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:13:01.779 issued rwts: total=38,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:01.779 latency : target=0, window=0, 
percentile=100.00%, depth=1024 00:13:01.779 job1: (groupid=0, jobs=1): err= 0: pid=83845: Tue Jul 23 05:04:01 2024 00:13:01.779 read: IOPS=0, BW=425KiB/s (435kB/s)(6144KiB/14467msec) 00:13:01.779 slat (usec): min=465, max=3282.0k, avg=547418.96, stdev=1339657.73 00:13:01.779 clat (msec): min=11181, max=14465, avg=13917.55, stdev=1340.26 00:13:01.779 lat (usec): min=14464k, max=14466k, avg=14464973.11, stdev=948.28 00:13:01.779 clat percentiles (msec): 00:13:01.779 | 1.00th=[11208], 5.00th=[11208], 10.00th=[11208], 20.00th=[14429], 00:13:01.779 | 30.00th=[14429], 40.00th=[14429], 50.00th=[14429], 60.00th=[14429], 00:13:01.779 | 70.00th=[14429], 80.00th=[14429], 90.00th=[14429], 95.00th=[14429], 00:13:01.779 | 99.00th=[14429], 99.50th=[14429], 99.90th=[14429], 99.95th=[14429], 00:13:01.779 | 99.99th=[14429] 00:13:01.779 lat (msec) : >=2000=100.00% 00:13:01.779 cpu : usr=0.00%, sys=0.01%, ctx=14, majf=0, minf=1537 00:13:01.779 IO depths : 1=16.7%, 2=33.3%, 4=50.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:01.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:01.779 complete : 0=0.0%, 4=0.0%, 8=100.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:01.779 issued rwts: total=6,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:01.779 latency : target=0, window=0, percentile=100.00%, depth=1024 00:13:01.779 job1: (groupid=0, jobs=1): err= 0: pid=83846: Tue Jul 23 05:04:01 2024 00:13:01.779 read: IOPS=1, BW=1624KiB/s (1663kB/s)(23.0MiB/14498msec) 00:13:01.779 slat (usec): min=407, max=8212.3k, avg=358017.04, stdev=1712178.28 00:13:01.779 clat (msec): min=6263, max=14496, avg=14127.59, stdev=1714.33 00:13:01.779 lat (msec): min=14475, max=14497, avg=14485.60, stdev= 6.63 00:13:01.779 clat percentiles (msec): 00:13:01.779 | 1.00th=[ 6275], 5.00th=[14429], 10.00th=[14429], 20.00th=[14429], 00:13:01.779 | 30.00th=[14429], 40.00th=[14429], 50.00th=[14429], 60.00th=[14429], 00:13:01.779 | 70.00th=[14429], 80.00th=[14429], 90.00th=[14429], 95.00th=[14429], 
00:13:01.779 | 99.00th=[14563], 99.50th=[14563], 99.90th=[14563], 99.95th=[14563], 00:13:01.779 | 99.99th=[14563] 00:13:01.779 lat (msec) : >=2000=100.00% 00:13:01.779 cpu : usr=0.00%, sys=0.08%, ctx=40, majf=0, minf=5889 00:13:01.779 IO depths : 1=4.3%, 2=8.7%, 4=17.4%, 8=34.8%, 16=34.8%, 32=0.0%, >=64=0.0% 00:13:01.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:01.779 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:13:01.779 issued rwts: total=23,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:01.779 latency : target=0, window=0, percentile=100.00%, depth=1024 00:13:01.779 job1: (groupid=0, jobs=1): err= 0: pid=83847: Tue Jul 23 05:04:01 2024 00:13:01.779 read: IOPS=0, BW=777KiB/s (796kB/s)(11.0MiB/14489msec) 00:13:01.779 slat (usec): min=690, max=3282.2k, avg=299668.47, stdev=989208.43 00:13:01.779 clat (msec): min=11192, max=14484, avg=14181.39, stdev=991.40 00:13:01.779 lat (msec): min=14474, max=14488, avg=14481.06, stdev= 4.41 00:13:01.779 clat percentiles (msec): 00:13:01.779 | 1.00th=[11208], 5.00th=[11208], 10.00th=[14429], 20.00th=[14429], 00:13:01.779 | 30.00th=[14429], 40.00th=[14429], 50.00th=[14429], 60.00th=[14429], 00:13:01.779 | 70.00th=[14429], 80.00th=[14429], 90.00th=[14429], 95.00th=[14429], 00:13:01.780 | 99.00th=[14429], 99.50th=[14429], 99.90th=[14429], 99.95th=[14429], 00:13:01.780 | 99.99th=[14429] 00:13:01.780 lat (msec) : >=2000=100.00% 00:13:01.780 cpu : usr=0.00%, sys=0.06%, ctx=24, majf=0, minf=2817 00:13:01.780 IO depths : 1=9.1%, 2=18.2%, 4=36.4%, 8=36.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:01.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:01.780 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:01.780 issued rwts: total=11,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:01.780 latency : target=0, window=0, percentile=100.00%, depth=1024 00:13:01.780 job1: (groupid=0, jobs=1): err= 0: pid=83848: Tue Jul 23 05:04:01 
2024 00:13:01.780 read: IOPS=1, BW=1553KiB/s (1591kB/s)(22.0MiB/14503msec) 00:13:01.780 slat (usec): min=513, max=3282.0k, avg=150530.68, stdev=699429.74 00:13:01.780 clat (msec): min=11191, max=14501, avg=14336.68, stdev=702.64 00:13:01.780 lat (msec): min=14473, max=14502, avg=14487.21, stdev= 9.93 00:13:01.780 clat percentiles (msec): 00:13:01.780 | 1.00th=[11208], 5.00th=[14429], 10.00th=[14429], 20.00th=[14429], 00:13:01.780 | 30.00th=[14429], 40.00th=[14429], 50.00th=[14429], 60.00th=[14429], 00:13:01.780 | 70.00th=[14429], 80.00th=[14429], 90.00th=[14563], 95.00th=[14563], 00:13:01.780 | 99.00th=[14563], 99.50th=[14563], 99.90th=[14563], 99.95th=[14563], 00:13:01.780 | 99.99th=[14563] 00:13:01.780 lat (msec) : >=2000=100.00% 00:13:01.780 cpu : usr=0.00%, sys=0.10%, ctx=37, majf=0, minf=5633 00:13:01.780 IO depths : 1=4.5%, 2=9.1%, 4=18.2%, 8=36.4%, 16=31.8%, 32=0.0%, >=64=0.0% 00:13:01.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:01.780 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:13:01.780 issued rwts: total=22,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:01.780 latency : target=0, window=0, percentile=100.00%, depth=1024 00:13:01.780 00:13:01.780 Run status group 0 (all jobs): 00:13:01.780 READ: bw=9885KiB/s (10.1MB/s), 284KiB/s-2687KiB/s (290kB/s-2751kB/s), io=140MiB (147MB), run=14444-14503msec 00:13:01.780 00:13:01.780 Disk stats (read/write): 00:13:01.780 sda: ios=50/0, merge=0/0, ticks=250414/0, in_queue=250414, util=99.40% 00:13:01.780 sdb: ios=20/0, merge=0/0, ticks=185309/0, in_queue=185309, util=98.04% 00:13:01.780 05:04:01 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@104 -- # '[' 1 -eq 1 ']' 00:13:01.780 05:04:01 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 4096 -d 1 -t write -r 300 -v 00:13:01.780 [global] 00:13:01.780 thread=1 00:13:01.780 invalidate=1 00:13:01.780 rw=write 00:13:01.780 time_based=1 00:13:01.780 runtime=300 
00:13:01.780 ioengine=libaio 00:13:01.780 direct=1 00:13:01.780 bs=4096 00:13:01.780 iodepth=1 00:13:01.780 norandommap=0 00:13:01.780 numjobs=1 00:13:01.780 00:13:01.780 verify_dump=1 00:13:01.780 verify_backlog=512 00:13:01.780 verify_state_save=0 00:13:01.780 do_verify=1 00:13:01.780 verify=crc32c-intel 00:13:01.780 [job0] 00:13:01.780 filename=/dev/sda 00:13:01.780 [job1] 00:13:01.780 filename=/dev/sdb 00:13:01.780 queue_depth set to 113 (sda) 00:13:01.780 queue_depth set to 113 (sdb) 00:13:01.780 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:01.780 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:01.780 fio-3.35 00:13:01.780 Starting 2 threads 00:13:01.780 [2024-07-23 05:04:01.996139] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:02.038 [2024-07-23 05:04:02.000297] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:12.005 [2024-07-23 05:04:11.798224] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:21.989 [2024-07-23 05:04:21.529780] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:31.954 [2024-07-23 05:04:30.567571] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:40.057 [2024-07-23 05:04:39.794581] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:50.023 [2024-07-23 05:04:48.728533] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:58.174 [2024-07-23 05:04:57.630335] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:08.147 [2024-07-23 05:05:06.652272] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:16.255 [2024-07-23 05:05:15.598803] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 
0xb9 00:14:16.255 [2024-07-23 05:05:15.788482] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:26.220 [2024-07-23 05:05:24.692162] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:34.335 [2024-07-23 05:05:33.523279] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:42.440 [2024-07-23 05:05:42.370033] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:52.409 [2024-07-23 05:05:51.181931] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:00.548 [2024-07-23 05:06:00.069850] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:10.574 [2024-07-23 05:06:09.146820] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:18.687 [2024-07-23 05:06:18.285193] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:28.687 [2024-07-23 05:06:27.204652] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:28.687 [2024-07-23 05:06:27.677104] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:36.795 [2024-07-23 05:06:36.121256] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:46.834 [2024-07-23 05:06:45.267018] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:54.942 [2024-07-23 05:06:54.257419] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:04.910 [2024-07-23 05:07:03.317267] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:13.045 [2024-07-23 05:07:12.597135] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:23.046 [2024-07-23 05:07:21.849555] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:33.055 [2024-07-23 05:07:31.426057] 
scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:41.165 [2024-07-23 05:07:41.054343] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:43.695 [2024-07-23 05:07:43.589805] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:51.859 [2024-07-23 05:07:50.775410] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:01.869 [2024-07-23 05:08:00.442383] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:09.994 [2024-07-23 05:08:09.466674] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:19.965 [2024-07-23 05:08:18.481303] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:28.103 [2024-07-23 05:08:27.362569] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:38.100 [2024-07-23 05:08:36.521147] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:46.248 [2024-07-23 05:08:46.007221] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:56.232 [2024-07-23 05:08:55.477272] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:57.635 [2024-07-23 05:08:57.822944] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:02.917 [2024-07-23 05:09:02.109713] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:02.917 [2024-07-23 05:09:02.113515] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:02.917 00:18:02.917 job0: (groupid=0, jobs=1): err= 0: pid=84036: Tue Jul 23 05:09:02 2024 00:18:02.917 read: IOPS=3571, BW=13.9MiB/s (14.6MB/s)(4185MiB/299997msec) 00:18:02.917 slat (usec): min=3, max=254, avg= 6.60, stdev= 2.85 00:18:02.917 clat (usec): min=2, max=3551, avg=132.43, stdev=20.77 00:18:02.917 lat (usec): 
min=76, max=3558, avg=139.03, stdev=20.73 00:18:02.917 clat percentiles (usec): 00:18:02.917 | 1.00th=[ 85], 5.00th=[ 103], 10.00th=[ 109], 20.00th=[ 119], 00:18:02.917 | 30.00th=[ 124], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 135], 00:18:02.917 | 70.00th=[ 139], 80.00th=[ 147], 90.00th=[ 155], 95.00th=[ 163], 00:18:02.917 | 99.00th=[ 188], 99.50th=[ 200], 99.90th=[ 245], 99.95th=[ 277], 00:18:02.917 | 99.99th=[ 367] 00:18:02.917 write: IOPS=3572, BW=14.0MiB/s (14.6MB/s)(4186MiB/299997msec); 0 zone resets 00:18:02.917 slat (usec): min=3, max=295, avg= 9.30, stdev= 3.26 00:18:02.917 clat (nsec): min=1385, max=3590.2k, avg=128527.07, stdev=30796.32 00:18:02.917 lat (usec): min=76, max=3596, avg=137.83, stdev=30.89 00:18:02.917 clat percentiles (usec): 00:18:02.917 | 1.00th=[ 75], 5.00th=[ 79], 10.00th=[ 80], 20.00th=[ 105], 00:18:02.917 | 30.00th=[ 123], 40.00th=[ 127], 50.00th=[ 131], 60.00th=[ 139], 00:18:02.917 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 159], 95.00th=[ 169], 00:18:02.917 | 99.00th=[ 196], 99.50th=[ 210], 99.90th=[ 258], 99.95th=[ 297], 00:18:02.917 | 99.99th=[ 586] 00:18:02.917 bw ( KiB/s): min= 9280, max=16416, per=50.22%, avg=14297.88, stdev=1278.39, samples=599 00:18:02.917 iops : min= 2320, max= 4104, avg=3574.43, stdev=319.60, samples=599 00:18:02.918 lat (usec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01% 00:18:02.918 lat (usec) : 100=11.45%, 250=88.44%, 500=0.09%, 750=0.01%, 1000=0.01% 00:18:02.918 lat (msec) : 2=0.01%, 4=0.01% 00:18:02.918 cpu : usr=3.02%, sys=5.91%, ctx=2146336, majf=0, minf=2 00:18:02.918 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:02.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:02.918 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:02.918 issued rwts: total=1071299,1071616,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:02.918 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:02.918 job1: (groupid=0, 
jobs=1): err= 0: pid=84038: Tue Jul 23 05:09:02 2024 00:18:02.918 read: IOPS=3543, BW=13.8MiB/s (14.5MB/s)(4153MiB/300000msec) 00:18:02.918 slat (usec): min=2, max=924, avg= 6.19, stdev= 3.06 00:18:02.918 clat (usec): min=2, max=3458, avg=129.26, stdev=22.67 00:18:02.918 lat (usec): min=63, max=3461, avg=135.45, stdev=23.10 00:18:02.918 clat percentiles (usec): 00:18:02.918 | 1.00th=[ 93], 5.00th=[ 108], 10.00th=[ 110], 20.00th=[ 114], 00:18:02.918 | 30.00th=[ 121], 40.00th=[ 124], 50.00th=[ 127], 60.00th=[ 133], 00:18:02.918 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 151], 95.00th=[ 163], 00:18:02.918 | 99.00th=[ 200], 99.50th=[ 215], 99.90th=[ 269], 99.95th=[ 310], 00:18:02.918 | 99.99th=[ 594] 00:18:02.918 write: IOPS=3544, BW=13.8MiB/s (14.5MB/s)(4154MiB/300000msec); 0 zone resets 00:18:02.918 slat (usec): min=3, max=412, avg= 9.05, stdev= 3.07 00:18:02.918 clat (nsec): min=1466, max=3601.1k, avg=134686.09, stdev=40665.72 00:18:02.918 lat (usec): min=71, max=3606, avg=143.73, stdev=40.78 00:18:02.918 clat percentiles (usec): 00:18:02.918 | 1.00th=[ 70], 5.00th=[ 73], 10.00th=[ 74], 20.00th=[ 87], 00:18:02.918 | 30.00th=[ 123], 40.00th=[ 130], 50.00th=[ 137], 60.00th=[ 143], 00:18:02.918 | 70.00th=[ 157], 80.00th=[ 172], 90.00th=[ 184], 95.00th=[ 194], 00:18:02.918 | 99.00th=[ 219], 99.50th=[ 231], 99.90th=[ 273], 99.95th=[ 302], 00:18:02.918 | 99.99th=[ 502] 00:18:02.918 bw ( KiB/s): min= 9520, max=16416, per=49.84%, avg=14187.97, stdev=1272.82, samples=599 00:18:02.918 iops : min= 2380, max= 4104, avg=3546.96, stdev=318.20, samples=599 00:18:02.918 lat (usec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01% 00:18:02.918 lat (usec) : 100=12.16%, 250=87.65%, 500=0.18%, 750=0.01%, 1000=0.01% 00:18:02.918 lat (msec) : 2=0.01%, 4=0.01% 00:18:02.918 cpu : usr=3.02%, sys=5.79%, ctx=2128220, majf=0, minf=1 00:18:02.918 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:02.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:18:02.918 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:02.918 issued rwts: total=1063044,1063424,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:02.918 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:02.918 00:18:02.918 Run status group 0 (all jobs): 00:18:02.918 READ: bw=27.8MiB/s (29.1MB/s), 13.8MiB/s-13.9MiB/s (14.5MB/s-14.6MB/s), io=8337MiB (8742MB), run=299997-300000msec 00:18:02.918 WRITE: bw=27.8MiB/s (29.1MB/s), 13.8MiB/s-14.0MiB/s (14.5MB/s-14.6MB/s), io=8340MiB (8745MB), run=299997-300000msec 00:18:02.918 00:18:02.918 Disk stats (read/write): 00:18:02.918 sda: ios=1072637/1071104, merge=0/0, ticks=138182/137231, in_queue=275413, util=100.00% 00:18:02.918 sdb: ios=1063043/1062912, merge=0/0, ticks=132589/142106, in_queue=274695, util=100.00% 00:18:02.918 05:09:02 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@116 -- # fio_pid=87382 00:18:02.918 05:09:02 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1048576 -d 128 -t rw -r 10 00:18:02.918 05:09:02 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@118 -- # sleep 3 00:18:02.918 [global] 00:18:02.918 thread=1 00:18:02.918 invalidate=1 00:18:02.918 rw=rw 00:18:02.918 time_based=1 00:18:02.918 runtime=10 00:18:02.918 ioengine=libaio 00:18:02.918 direct=1 00:18:02.918 bs=1048576 00:18:02.918 iodepth=128 00:18:02.918 norandommap=1 00:18:02.918 numjobs=1 00:18:02.918 00:18:02.918 [job0] 00:18:02.918 filename=/dev/sda 00:18:02.918 [job1] 00:18:02.918 filename=/dev/sdb 00:18:02.918 queue_depth set to 113 (sda) 00:18:02.918 queue_depth set to 113 (sdb) 00:18:02.918 job0: (g=0): rw=rw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:18:02.918 job1: (g=0): rw=rw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:18:02.918 fio-3.35 00:18:02.918 Starting 2 threads 00:18:02.918 [2024-07-23 05:09:02.333548] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:02.918 [2024-07-23 05:09:02.338546] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:05.453 05:09:05 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:18:05.453 [2024-07-23 05:09:05.416468] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (raid0) received event(SPDK_BDEV_EVENT_REMOVE) 00:18:05.453 [2024-07-23 05:09:05.418646] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c36 00:18:05.453 [2024-07-23 05:09:05.418795] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c36 00:18:05.453 [2024-07-23 05:09:05.418864] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c36 00:18:05.453 [2024-07-23 05:09:05.426259] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c36 00:18:05.453 [2024-07-23 05:09:05.427784] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c36 00:18:05.453 [2024-07-23 05:09:05.429661] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c36 00:18:05.453 [2024-07-23 05:09:05.431544] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c36 00:18:05.453 [2024-07-23 05:09:05.433514] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c36 00:18:05.453 [2024-07-23 05:09:05.433619] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c36 00:18:05.453 [2024-07-23 05:09:05.433692] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c36 00:18:05.453 [2024-07-23 05:09:05.433756] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c36 00:18:05.453 [2024-07-23 05:09:05.433824] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c36 00:18:05.453 [2024-07-23 05:09:05.433886] 
iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c36 00:18:05.453 [2024-07-23 05:09:05.433948] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c36 00:18:05.453 05:09:05 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@124 -- # for malloc_bdev in $malloc_bdevs 00:18:05.453 05:09:05 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:18:05.453 [2024-07-23 05:09:05.451136] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c36 00:18:05.453 [2024-07-23 05:09:05.451250] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c36 00:18:05.453 [2024-07-23 05:09:05.451334] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c37 00:18:05.453 [2024-07-23 05:09:05.454016] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c37 00:18:05.453 [2024-07-23 05:09:05.454130] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c37 00:18:05.454 [2024-07-23 05:09:05.457652] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c37 00:18:05.454 [2024-07-23 05:09:05.459035] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c37 00:18:05.454 [2024-07-23 05:09:05.460474] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c37 00:18:05.454 [2024-07-23 05:09:05.461750] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c37 00:18:05.454 [2024-07-23 05:09:05.463122] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c37 00:18:05.454 [2024-07-23 05:09:05.464823] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c37 00:18:05.454 [2024-07-23 05:09:05.466097] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c37 00:18:05.454 [2024-07-23 05:09:05.467553] iscsi.c:4221:iscsi_pdu_hdr_op_data: 
*ERROR*: Not found task for transfer_tag=c37 00:18:05.454 [2024-07-23 05:09:05.468896] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c37 00:18:05.454 [2024-07-23 05:09:05.470240] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c37 00:18:05.454 [2024-07-23 05:09:05.471535] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c37 00:18:05.454 [2024-07-23 05:09:05.472871] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c37 00:18:05.454 [2024-07-23 05:09:05.474427] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c37 00:18:05.454 [2024-07-23 05:09:05.475607] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c38 00:18:05.454 [2024-07-23 05:09:05.478211] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c38 00:18:05.454 [2024-07-23 05:09:05.480312] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c38 00:18:05.454 [2024-07-23 05:09:05.482443] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c38 00:18:05.454 [2024-07-23 05:09:05.484152] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c38 00:18:05.454 [2024-07-23 05:09:05.486574] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c38 00:18:05.454 [2024-07-23 05:09:05.487830] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c38 00:18:05.454 [2024-07-23 05:09:05.489371] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c38 00:18:05.454 [2024-07-23 05:09:05.490444] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c38 00:18:05.454 [2024-07-23 05:09:05.491688] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c38 00:18:05.454 [2024-07-23 05:09:05.493049] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for 
transfer_tag=c38 00:18:05.454 [2024-07-23 05:09:05.494444] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c38 00:18:05.454 [2024-07-23 05:09:05.495681] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c38 00:18:05.454 [2024-07-23 05:09:05.497245] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c38 00:18:05.454 [2024-07-23 05:09:05.498450] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c38 00:18:05.454 [2024-07-23 05:09:05.500022] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c38 00:18:05.712 fio: io_u error on file /dev/sda: Input/output error: write offset=28311552, buflen=1048576 00:18:05.712 fio: io_u error on file /dev/sda: Input/output error: write offset=29360128, buflen=1048576 00:18:05.712 05:09:05 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@124 -- # for malloc_bdev in $malloc_bdevs 00:18:05.712 05:09:05 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:05.971 fio: io_u error on file /dev/sda: Input/output error: write offset=25165824, buflen=1048576 00:18:05.971 fio: io_u error on file /dev/sda: Input/output error: write offset=26214400, buflen=1048576 00:18:05.971 fio: io_u error on file /dev/sda: Input/output error: write offset=27262976, buflen=1048576 00:18:05.971 fio: io_u error on file /dev/sda: Input/output error: write offset=30408704, buflen=1048576 00:18:05.971 fio: io_u error on file /dev/sda: Input/output error: write offset=31457280, buflen=1048576 00:18:05.971 fio: io_u error on file /dev/sda: Input/output error: read offset=6291456, buflen=1048576 00:18:05.971 fio: io_u error on file /dev/sda: Input/output error: read offset=7340032, buflen=1048576 00:18:05.971 fio: io_u error on file /dev/sda: Input/output error: read offset=8388608, buflen=1048576 00:18:05.971 fio: io_u error on file /dev/sda: Input/output error: read 
offset=9437184, buflen=1048576 00:18:05.971 fio: io_u error on file /dev/sda: Input/output error: read offset=10485760, buflen=1048576 00:18:05.971 fio: io_u error on file /dev/sda: Input/output error: write offset=32505856, buflen=1048576 00:18:05.971 fio: io_u error on file /dev/sda: Input/output error: write offset=33554432, buflen=1048576 00:18:05.971 fio: io_u error on file /dev/sda: Input/output error: write offset=34603008, buflen=1048576 00:18:05.971 fio: io_u error on file /dev/sda: Input/output error: read offset=11534336, buflen=1048576 00:18:05.971 fio: io_u error on file /dev/sda: Input/output error: read offset=12582912, buflen=1048576 00:18:05.971 fio: io_u error on file /dev/sda: Input/output error: read offset=13631488, buflen=1048576 00:18:05.971 fio: io_u error on file /dev/sda: Input/output error: write offset=35651584, buflen=1048576 00:18:05.971 fio: io_u error on file /dev/sda: Input/output error: read offset=14680064, buflen=1048576 00:18:05.971 fio: io_u error on file /dev/sda: Input/output error: read offset=15728640, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=16777216, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=36700160, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=17825792, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=18874368, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=37748736, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=19922944, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=38797312, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=39845888, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=20971520, 
buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=22020096, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=23068672, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=40894464, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=41943040, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=24117248, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=25165824, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=42991616, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=44040192, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=45088768, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=46137344, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=26214400, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=47185920, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=48234496, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=27262976, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=49283072, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=28311552, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=50331648, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=51380224, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=52428800, 
buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=53477376, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=29360128, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=54525952, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=30408704, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=55574528, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=31457280, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=56623104, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=32505856, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=57671680, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=58720256, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=59768832, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=60817408, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=33554432, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=34603008, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=35651584, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=36700160, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=37748736, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=61865984, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=62914560, 
buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=63963136, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=38797312, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=39845888, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=65011712, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=66060288, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=67108864, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=68157440, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=40894464, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=69206016, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=41943040, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=42991616, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=70254592, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=44040192, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=71303168, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=45088768, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=72351744, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=73400320, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=46137344, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=74448896, 
buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=75497472, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=76546048, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=47185920, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=77594624, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=78643200, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=48234496, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=79691776, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=80740352, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=81788928, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=82837504, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=83886080, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=49283072, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=50331648, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=84934656, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=85983232, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=51380224, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=52428800, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=87031808, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=53477376, 
buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=88080384, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=54525952, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=55574528, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=56623104, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=57671680, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=58720256, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=59768832, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=60817408, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=89128960, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=90177536, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=91226112, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=61865984, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=62914560, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=92274688, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=93323264, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=94371840, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=63963136, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: write offset=95420416, buflen=1048576 00:18:05.972 fio: io_u error on file /dev/sda: Input/output error: read offset=65011712, buflen=1048576 
00:18:05.973 fio: io_u error on file /dev/sda: Input/output error: read offset=66060288, buflen=1048576 00:18:05.973 fio: pid=87420, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:18:05.973 fio: io_u error on file /dev/sda: Input/output error: write offset=96468992, buflen=1048576 00:18:05.973 fio: io_u error on file /dev/sda: Input/output error: read offset=67108864, buflen=1048576 00:18:05.973 05:09:06 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:18:06.231 [2024-07-23 05:09:06.329798] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (Malloc2) received event(SPDK_BDEV_EVENT_REMOVE) 00:18:06.231 [2024-07-23 05:09:06.330178] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d16 00:18:06.231 [2024-07-23 05:09:06.330269] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d16 00:18:06.800 [2024-07-23 05:09:06.741404] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d16 00:18:06.800 [2024-07-23 05:09:06.742690] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d16 00:18:06.800 [2024-07-23 05:09:06.743778] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d16 00:18:06.800 [2024-07-23 05:09:06.745284] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d16 00:18:06.800 [2024-07-23 05:09:06.746440] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d16 00:18:06.800 [2024-07-23 05:09:06.747478] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d16 00:18:06.800 [2024-07-23 05:09:06.748839] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d16 00:18:06.800 [2024-07-23 05:09:06.749888] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d16 00:18:06.800 [2024-07-23 05:09:06.751201] iscsi.c:4221:iscsi_pdu_hdr_op_data: 
*ERROR*: Not found task for transfer_tag=d16 00:18:06.800 [2024-07-23 05:09:06.752272] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d16 00:18:06.800 [2024-07-23 05:09:06.753578] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d17 00:18:06.800 [2024-07-23 05:09:06.754600] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d17 00:18:06.800 [2024-07-23 05:09:06.755844] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d17 00:18:06.800 [2024-07-23 05:09:06.756811] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d17 00:18:06.800 [2024-07-23 05:09:06.758016] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d17 00:18:06.800 [2024-07-23 05:09:06.759041] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d17 00:18:06.800 [2024-07-23 05:09:06.760244] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d17 00:18:06.800 [2024-07-23 05:09:06.761403] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d17 00:18:06.800 05:09:06 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@131 -- # fio_status=0 00:18:06.800 05:09:06 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@132 -- # wait 87382 00:18:06.800 [2024-07-23 05:09:06.762399] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d17 00:18:06.800 [2024-07-23 05:09:06.763613] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d17 00:18:06.800 [2024-07-23 05:09:06.764718] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d17 00:18:06.800 [2024-07-23 05:09:06.766005] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d17 00:18:06.800 [2024-07-23 05:09:06.767012] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d17 00:18:06.800 [2024-07-23 05:09:06.768209] 
iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d17 00:18:06.800 [2024-07-23 05:09:06.769253] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d17 00:18:06.800 [2024-07-23 05:09:06.770452] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d17 00:18:06.800 [2024-07-23 05:09:06.771455] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d18 00:18:06.800 [2024-07-23 05:09:06.772656] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d18 00:18:06.800 [2024-07-23 05:09:06.773666] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d18 00:18:06.800 [2024-07-23 05:09:06.774893] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d18 00:18:06.800 [2024-07-23 05:09:06.775875] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d18 00:18:06.800 [2024-07-23 05:09:06.775965] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d18 00:18:06.800 [2024-07-23 05:09:06.776084] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d18 00:18:06.800 [2024-07-23 05:09:06.776171] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d18 00:18:06.800 [2024-07-23 05:09:06.776235] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d18 00:18:06.800 [2024-07-23 05:09:06.776300] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d18 00:18:06.800 [2024-07-23 05:09:06.781580] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d18 00:18:06.800 [2024-07-23 05:09:06.781675] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d18 00:18:06.800 [2024-07-23 05:09:06.785281] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d18 00:18:06.800 [2024-07-23 05:09:06.785379] 
iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d18 00:18:06.800 [2024-07-23 05:09:06.785443] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d18 00:18:06.800 [2024-07-23 05:09:06.789008] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d18 00:18:06.800 [2024-07-23 05:09:06.790294] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d19 00:18:06.800 [2024-07-23 05:09:06.791832] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d19 00:18:06.800 [2024-07-23 05:09:06.793103] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d19 00:18:06.800 [2024-07-23 05:09:06.794546] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d19 00:18:06.800 [2024-07-23 05:09:06.795859] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d19 00:18:06.800 [2024-07-23 05:09:06.797109] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d19 00:18:06.800 [2024-07-23 05:09:06.798554] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d19 00:18:06.800 [2024-07-23 05:09:06.799769] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d19 00:18:06.800 [2024-07-23 05:09:06.801037] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d19 00:18:06.800 [2024-07-23 05:09:06.802533] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d19 00:18:06.800 [2024-07-23 05:09:06.803733] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d19 00:18:06.800 [2024-07-23 05:09:06.804969] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d19 00:18:06.800 [2024-07-23 05:09:06.806581] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d19 00:18:06.800 [2024-07-23 05:09:06.807874] 
iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d19 00:18:06.800 [2024-07-23 05:09:06.809126] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d19 00:18:06.800 [2024-07-23 05:09:06.810468] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=d19 00:18:06.800 fio: io_u error on file /dev/sdb: Input/output error: write offset=392167424, buflen=1048576 00:18:06.800 fio: io_u error on file /dev/sdb: Input/output error: write offset=393216000, buflen=1048576 00:18:06.800 fio: io_u error on file /dev/sdb: Input/output error: write offset=394264576, buflen=1048576 00:18:06.800 fio: io_u error on file /dev/sdb: Input/output error: write offset=395313152, buflen=1048576 00:18:06.800 fio: io_u error on file /dev/sdb: Input/output error: write offset=396361728, buflen=1048576 00:18:06.800 fio: io_u error on file /dev/sdb: Input/output error: write offset=397410304, buflen=1048576 00:18:06.800 fio: io_u error on file /dev/sdb: Input/output error: write offset=398458880, buflen=1048576 00:18:06.800 fio: io_u error on file /dev/sdb: Input/output error: write offset=399507456, buflen=1048576 00:18:06.800 fio: io_u error on file /dev/sdb: Input/output error: write offset=387973120, buflen=1048576 00:18:06.800 fio: io_u error on file /dev/sdb: Input/output error: write offset=389021696, buflen=1048576 00:18:06.800 fio: io_u error on file /dev/sdb: Input/output error: write offset=390070272, buflen=1048576 00:18:06.800 fio: io_u error on file /dev/sdb: Input/output error: write offset=391118848, buflen=1048576 00:18:06.800 fio: io_u error on file /dev/sdb: Input/output error: write offset=400556032, buflen=1048576 00:18:06.800 fio: io_u error on file /dev/sdb: Input/output error: read offset=357564416, buflen=1048576 00:18:06.800 fio: io_u error on file /dev/sdb: Input/output error: read offset=358612992, buflen=1048576 00:18:06.800 fio: io_u error on file /dev/sdb: Input/output error: write 
offset=401604608, buflen=1048576 00:18:06.800 fio: io_u error on file /dev/sdb: Input/output error: read offset=359661568, buflen=1048576 00:18:06.800 fio: io_u error on file /dev/sdb: Input/output error: write offset=402653184, buflen=1048576 00:18:06.800 fio: io_u error on file /dev/sdb: Input/output error: read offset=360710144, buflen=1048576 [... repeated "fio: io_u error on file /dev/sdb: Input/output error" lines omitted: alternating read/write, offsets 359661568-469762048, buflen=1048576, timestamps 00:18:06.800-00:18:06.801 ...] 00:18:06.801 fio: pid=87423, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:18:06.801 00:18:06.801 
job0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=87420: Tue Jul 23 05:09:06 2024 00:18:06.801 read: IOPS=92, BW=75.3MiB/s (79.0MB/s)(262MiB/3478msec) 00:18:06.801 slat (usec): min=29, max=626146, avg=4131.00, stdev=35395.56 00:18:06.801 clat (msec): min=119, max=1745, avg=619.17, stdev=386.19 00:18:06.801 lat (msec): min=119, max=1745, avg=623.53, stdev=386.13 00:18:06.801 clat percentiles (msec): 00:18:06.801 | 1.00th=[ 133], 5.00th=[ 169], 10.00th=[ 232], 20.00th=[ 284], 00:18:06.801 | 30.00th=[ 347], 40.00th=[ 384], 50.00th=[ 418], 60.00th=[ 877], 00:18:06.801 | 70.00th=[ 902], 80.00th=[ 1099], 90.00th=[ 1116], 95.00th=[ 1133], 00:18:06.801 | 99.00th=[ 1720], 99.50th=[ 1754], 99.90th=[ 1754], 99.95th=[ 1754], 00:18:06.801 | 99.99th=[ 1754] 00:18:06.801 bw ( KiB/s): min= 2043, max=171676, per=56.65%, avg=82171.00, stdev=68108.48, samples=6 00:18:06.801 iops : min= 1, max= 167, avg=79.67, stdev=66.79, samples=6 00:18:06.801 write: IOPS=100, BW=80.5MiB/s (84.4MB/s)(280MiB/3478msec); 0 zone resets 00:18:06.801 slat (usec): min=39, max=624005, avg=5406.25, stdev=36352.70 00:18:06.801 clat (msec): min=183, max=1813, avg=729.04, stdev=465.21 00:18:06.802 lat (msec): min=183, max=1813, avg=734.56, stdev=469.64 00:18:06.802 clat percentiles (msec): 00:18:06.802 | 1.00th=[ 184], 5.00th=[ 201], 10.00th=[ 334], 20.00th=[ 384], 00:18:06.802 | 30.00th=[ 414], 40.00th=[ 435], 50.00th=[ 460], 60.00th=[ 927], 00:18:06.802 | 70.00th=[ 961], 80.00th=[ 1116], 90.00th=[ 1183], 95.00th=[ 1787], 00:18:06.802 | 99.00th=[ 1804], 99.50th=[ 1804], 99.90th=[ 1821], 99.95th=[ 1821], 00:18:06.802 | 99.99th=[ 1821] 00:18:06.802 bw ( KiB/s): min=10219, max=197932, per=56.88%, avg=88939.17, stdev=76857.86, samples=6 00:18:06.802 iops : min= 9, max= 193, avg=86.33, stdev=75.43, samples=6 00:18:06.802 lat (msec) : 250=10.00%, 500=34.93%, 750=3.43%, 1000=13.58%, 2000=18.96% 00:18:06.802 cpu : usr=1.01%, sys=1.15%, ctx=175, majf=0, minf=1 
00:18:06.802 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.6% 00:18:06.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:06.802 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:18:06.802 issued rwts: total=321,349,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:06.802 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:06.802 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=87423: Tue Jul 23 05:09:06 2024 00:18:06.802 read: IOPS=91, BW=80.1MiB/s (84.0MB/s)(341MiB/4257msec) 00:18:06.802 slat (usec): min=32, max=624469, avg=6184.68, stdev=40903.23 00:18:06.802 clat (msec): min=174, max=2211, avg=603.24, stdev=611.17 00:18:06.802 lat (msec): min=174, max=2212, avg=608.78, stdev=615.95 00:18:06.802 clat percentiles (msec): 00:18:06.802 | 1.00th=[ 182], 5.00th=[ 194], 10.00th=[ 209], 20.00th=[ 226], 00:18:06.802 | 30.00th=[ 236], 40.00th=[ 251], 50.00th=[ 326], 60.00th=[ 514], 00:18:06.802 | 70.00th=[ 558], 80.00th=[ 625], 90.00th=[ 2056], 95.00th=[ 2123], 00:18:06.802 | 99.00th=[ 2198], 99.50th=[ 2198], 99.90th=[ 2198], 99.95th=[ 2198], 00:18:06.802 | 99.99th=[ 2198] 00:18:06.802 bw ( KiB/s): min= 6131, max=268288, per=87.54%, avg=126973.40, stdev=93519.57, samples=5 00:18:06.802 iops : min= 5, max= 262, avg=123.80, stdev=91.65, samples=5 00:18:06.802 write: IOPS=105, BW=86.9MiB/s (91.1MB/s)(370MiB/4257msec); 0 zone resets 00:18:06.802 slat (usec): min=51, max=626332, avg=4095.36, stdev=30225.30 00:18:06.802 clat (msec): min=223, max=2251, avg=662.93, stdev=603.55 00:18:06.802 lat (msec): min=224, max=2251, avg=667.80, stdev=604.52 00:18:06.802 clat percentiles (msec): 00:18:06.802 | 1.00th=[ 230], 5.00th=[ 247], 10.00th=[ 262], 20.00th=[ 271], 00:18:06.802 | 30.00th=[ 279], 40.00th=[ 300], 50.00th=[ 426], 60.00th=[ 558], 00:18:06.802 | 70.00th=[ 600], 80.00th=[ 676], 90.00th=[ 2039], 95.00th=[ 2165], 00:18:06.802 | 99.00th=[ 
2232], 99.50th=[ 2265], 99.90th=[ 2265], 99.95th=[ 2265], 00:18:06.802 | 99.99th=[ 2265] 00:18:06.802 bw ( KiB/s): min=118784, max=260096, per=100.00%, avg=173568.00, stdev=60623.91, samples=4 00:18:06.802 iops : min= 116, max= 254, avg=169.50, stdev=59.20, samples=4 00:18:06.802 lat (msec) : 250=19.43%, 500=29.56%, 750=20.98%, 2000=4.65%, >=2000=10.13% 00:18:06.802 cpu : usr=1.06%, sys=1.27%, ctx=195, majf=0, minf=1 00:18:06.802 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.8%, >=64=92.5% 00:18:06.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:06.802 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:06.802 issued rwts: total=390,449,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:06.802 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:06.802 00:18:06.802 Run status group 0 (all jobs): 00:18:06.802 READ: bw=142MiB/s (149MB/s), 75.3MiB/s-80.1MiB/s (79.0MB/s-84.0MB/s), io=603MiB (632MB), run=3478-4257msec 00:18:06.802 WRITE: bw=153MiB/s (160MB/s), 80.5MiB/s-86.9MiB/s (84.4MB/s-91.1MB/s), io=650MiB (682MB), run=3478-4257msec 00:18:06.802 00:18:06.802 Disk stats (read/write): 00:18:06.802 sda: ios=347/324, merge=0/0, ticks=73223/94020, in_queue=167244, util=90.84% 00:18:06.802 sdb: ios=389/379, merge=0/0, ticks=91791/116174, in_queue=207965, util=85.10% 00:18:06.802 iscsi hotplug test: fio failed as expected 00:18:06.802 Cleaning up iSCSI connection 00:18:06.802 05:09:06 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@132 -- # fio_status=2 00:18:06.802 05:09:06 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@134 -- # '[' 2 -eq 0 ']' 00:18:06.802 05:09:06 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@138 -- # echo 'iscsi hotplug test: fio failed as expected' 00:18:06.802 05:09:06 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@141 -- # iscsicleanup 00:18:06.802 05:09:06 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:18:06.802 05:09:06 iscsi_tgt.iscsi_tgt_fio -- 
common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:18:06.802 Logging out of session [sid: 19, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:18:06.802 Logout of [sid: 19, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:18:06.802 05:09:06 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:18:06.802 05:09:07 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@983 -- # rm -rf 00:18:06.802 05:09:07 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@142 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_delete_target_node iqn.2016-06.io.spdk:Target3 00:18:07.061 05:09:07 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@144 -- # delete_tmp_files 00:18:07.061 05:09:07 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@14 -- # rm -f /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/fio/iscsi2.json 00:18:07.061 05:09:07 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@15 -- # rm -f ./local-job0-0-verify.state 00:18:07.061 05:09:07 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@16 -- # rm -f ./local-job1-1-verify.state 00:18:07.061 05:09:07 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:18:07.061 05:09:07 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@148 -- # killprocess 83498 00:18:07.061 05:09:07 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@948 -- # '[' -z 83498 ']' 00:18:07.061 05:09:07 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@952 -- # kill -0 83498 00:18:07.061 05:09:07 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@953 -- # uname 00:18:07.061 05:09:07 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:07.061 05:09:07 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83498 00:18:07.061 killing process with pid 83498 00:18:07.061 05:09:07 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:07.061 05:09:07 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:18:07.061 05:09:07 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83498' 00:18:07.061 05:09:07 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@967 -- # kill 83498 00:18:07.061 05:09:07 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@972 -- # wait 83498 00:18:07.628 05:09:07 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@150 -- # iscsitestfini 00:18:07.628 05:09:07 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:18:07.628 00:18:07.628 real 5m29.775s 00:18:07.628 user 3m16.151s 00:18:07.628 sys 2m11.645s 00:18:07.628 05:09:07 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:07.628 ************************************ 00:18:07.629 END TEST iscsi_tgt_fio 00:18:07.629 ************************************ 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@10 -- # set +x 00:18:07.629 05:09:07 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:18:07.629 05:09:07 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@38 -- # run_test iscsi_tgt_qos /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/qos/qos.sh 00:18:07.629 05:09:07 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:07.629 05:09:07 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:07.629 05:09:07 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:18:07.629 ************************************ 00:18:07.629 START TEST iscsi_tgt_qos 00:18:07.629 ************************************ 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/qos/qos.sh 00:18:07.629 * Looking for test storage... 
00:18:07.629 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/qos 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- 
iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@11 -- # iscsitestinit 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@44 -- # '[' -z 10.0.0.1 ']' 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@49 -- # '[' -z 10.0.0.2 ']' 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@54 -- # MALLOC_BDEV_SIZE=64 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@55 -- # MALLOC_BLOCK_SIZE=512 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@56 -- # IOPS_RESULT= 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@57 -- # BANDWIDTH_RESULT= 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@58 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@60 -- # timing_enter start_iscsi_tgt 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:07.629 Process pid: 87572 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@63 -- # pid=87572 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@62 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@64 -- # echo 'Process pid: 87572' 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@65 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:18:07.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@66 -- # waitforlisten 87572 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@829 -- # '[' -z 87572 ']' 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:07.629 05:09:07 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:07.888 [2024-07-23 05:09:07.902548] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:18:07.888 [2024-07-23 05:09:07.903399] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87572 ] 00:18:07.888 [2024-07-23 05:09:08.043635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.147 [2024-07-23 05:09:08.145175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.715 05:09:08 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:08.715 iscsi_tgt is listening. Running tests... 00:18:08.715 05:09:08 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@862 -- # return 0 00:18:08.715 05:09:08 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@67 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:18:08.715 05:09:08 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@69 -- # timing_exit start_iscsi_tgt 00:18:08.715 05:09:08 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:08.715 05:09:08 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:08.715 05:09:08 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@71 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:18:08.715 05:09:08 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.715 05:09:08 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:08.715 05:09:08 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.715 05:09:08 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@72 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:18:08.715 05:09:08 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.715 05:09:08 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:08.715 05:09:08 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.715 05:09:08 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@73 -- # rpc_cmd bdev_malloc_create 64 512 00:18:08.716 05:09:08 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.716 05:09:08 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:08.716 Malloc0 00:18:08.716 05:09:08 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.716 05:09:08 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@78 -- # rpc_cmd iscsi_create_target_node Target1 Target1_alias Malloc0:0 1:2 64 -d 00:18:08.716 05:09:08 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.716 05:09:08 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:08.716 05:09:08 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.716 05:09:08 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@79 
-- # sleep 1 00:18:10.104 05:09:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@81 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:18:10.104 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:18:10.104 05:09:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@82 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:18:10.105 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:18:10.105 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:18:10.105 05:09:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@84 -- # trap 'iscsicleanup; killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:18:10.105 05:09:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@87 -- # run_fio Malloc0 00:18:10.105 05:09:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:18:10.105 05:09:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:18:10.105 05:09:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:18:10.105 05:09:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:18:10.105 05:09:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:18:10.105 05:09:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:18:10.105 05:09:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:18:10.105 05:09:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:18:10.105 05:09:09 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.105 05:09:09 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:10.105 [2024-07-23 05:09:09.924067] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:10.105 05:09:09 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.105 05:09:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:18:10.105 "tick_rate": 2200000000, 
00:18:10.105 "ticks": 2175778840913, 00:18:10.105 "bdevs": [ 00:18:10.105 { 00:18:10.105 "name": "Malloc0", 00:18:10.105 "bytes_read": 37376, 00:18:10.105 "num_read_ops": 3, 00:18:10.105 "bytes_written": 0, 00:18:10.105 "num_write_ops": 0, 00:18:10.105 "bytes_unmapped": 0, 00:18:10.105 "num_unmap_ops": 0, 00:18:10.105 "bytes_copied": 0, 00:18:10.105 "num_copy_ops": 0, 00:18:10.105 "read_latency_ticks": 982828, 00:18:10.105 "max_read_latency_ticks": 394130, 00:18:10.105 "min_read_latency_ticks": 277670, 00:18:10.105 "write_latency_ticks": 0, 00:18:10.105 "max_write_latency_ticks": 0, 00:18:10.105 "min_write_latency_ticks": 0, 00:18:10.105 "unmap_latency_ticks": 0, 00:18:10.105 "max_unmap_latency_ticks": 0, 00:18:10.105 "min_unmap_latency_ticks": 0, 00:18:10.105 "copy_latency_ticks": 0, 00:18:10.105 "max_copy_latency_ticks": 0, 00:18:10.105 "min_copy_latency_ticks": 0, 00:18:10.105 "io_error": {} 00:18:10.105 } 00:18:10.105 ] 00:18:10.105 }' 00:18:10.105 05:09:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:18:10.105 05:09:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=3 00:18:10.105 05:09:09 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:18:10.105 05:09:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=37376 00:18:10.105 05:09:10 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:18:10.105 [global] 00:18:10.105 thread=1 00:18:10.105 invalidate=1 00:18:10.105 rw=randread 00:18:10.105 time_based=1 00:18:10.105 runtime=5 00:18:10.105 ioengine=libaio 00:18:10.105 direct=1 00:18:10.105 bs=1024 00:18:10.105 iodepth=128 00:18:10.105 norandommap=1 00:18:10.105 numjobs=1 00:18:10.105 00:18:10.105 [job0] 00:18:10.105 filename=/dev/sda 00:18:10.105 queue_depth set to 113 (sda) 00:18:10.105 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, 
iodepth=128 00:18:10.105 fio-3.35 00:18:10.105 Starting 1 thread 00:18:15.396 00:18:15.396 job0: (groupid=0, jobs=1): err= 0: pid=87653: Tue Jul 23 05:09:15 2024 00:18:15.396 read: IOPS=40.2k, BW=39.3MiB/s (41.2MB/s)(196MiB/5003msec) 00:18:15.396 slat (nsec): min=1634, max=1440.7k, avg=23189.16, stdev=72278.37 00:18:15.396 clat (usec): min=1150, max=5852, avg=3159.09, stdev=141.99 00:18:15.396 lat (usec): min=1156, max=5854, avg=3182.28, stdev=123.70 00:18:15.396 clat percentiles (usec): 00:18:15.396 | 1.00th=[ 2769], 5.00th=[ 2933], 10.00th=[ 2999], 20.00th=[ 3064], 00:18:15.396 | 30.00th=[ 3097], 40.00th=[ 3130], 50.00th=[ 3163], 60.00th=[ 3195], 00:18:15.396 | 70.00th=[ 3228], 80.00th=[ 3261], 90.00th=[ 3326], 95.00th=[ 3359], 00:18:15.396 | 99.00th=[ 3458], 99.50th=[ 3523], 99.90th=[ 4080], 99.95th=[ 4228], 00:18:15.396 | 99.99th=[ 5407] 00:18:15.396 bw ( KiB/s): min=39872, max=40830, per=100.00%, avg=40300.22, stdev=343.92, samples=9 00:18:15.396 iops : min=39872, max=40830, avg=40300.22, stdev=343.92, samples=9 00:18:15.396 lat (msec) : 2=0.04%, 4=99.85%, 10=0.11% 00:18:15.396 cpu : usr=8.00%, sys=14.87%, ctx=115729, majf=0, minf=32 00:18:15.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:18:15.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:15.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:15.396 issued rwts: total=201162,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:15.396 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:15.396 00:18:15.396 Run status group 0 (all jobs): 00:18:15.396 READ: bw=39.3MiB/s (41.2MB/s), 39.3MiB/s-39.3MiB/s (41.2MB/s-41.2MB/s), io=196MiB (206MB), run=5003-5003msec 00:18:15.396 00:18:15.396 Disk stats (read/write): 00:18:15.396 sda: ios=196757/0, merge=0/0, ticks=534085/0, in_queue=534085, util=98.14% 00:18:15.396 05:09:15 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:18:15.396 
05:09:15 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.396 05:09:15 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:15.396 05:09:15 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.396 05:09:15 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:18:15.396 "tick_rate": 2200000000, 00:18:15.396 "ticks": 2187777195942, 00:18:15.396 "bdevs": [ 00:18:15.396 { 00:18:15.396 "name": "Malloc0", 00:18:15.396 "bytes_read": 207100416, 00:18:15.396 "num_read_ops": 201219, 00:18:15.396 "bytes_written": 0, 00:18:15.396 "num_write_ops": 0, 00:18:15.396 "bytes_unmapped": 0, 00:18:15.396 "num_unmap_ops": 0, 00:18:15.396 "bytes_copied": 0, 00:18:15.396 "num_copy_ops": 0, 00:18:15.396 "read_latency_ticks": 54426050513, 00:18:15.396 "max_read_latency_ticks": 2443389, 00:18:15.396 "min_read_latency_ticks": 11678, 00:18:15.396 "write_latency_ticks": 0, 00:18:15.396 "max_write_latency_ticks": 0, 00:18:15.396 "min_write_latency_ticks": 0, 00:18:15.396 "unmap_latency_ticks": 0, 00:18:15.396 "max_unmap_latency_ticks": 0, 00:18:15.396 "min_unmap_latency_ticks": 0, 00:18:15.396 "copy_latency_ticks": 0, 00:18:15.396 "max_copy_latency_ticks": 0, 00:18:15.396 "min_copy_latency_ticks": 0, 00:18:15.396 "io_error": {} 00:18:15.396 } 00:18:15.396 ] 00:18:15.396 }' 00:18:15.396 05:09:15 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:18:15.396 05:09:15 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=201219 00:18:15.396 05:09:15 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:18:15.396 05:09:15 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=207100416 00:18:15.396 05:09:15 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=40243 00:18:15.396 05:09:15 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=41412608 00:18:15.396 05:09:15 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@90 -- # IOPS_LIMIT=20121 
00:18:15.396 05:09:15 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@91 -- # BANDWIDTH_LIMIT=20706304 00:18:15.396 05:09:15 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@94 -- # READ_BANDWIDTH_LIMIT=10353152 00:18:15.396 05:09:15 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@98 -- # IOPS_LIMIT=20000 00:18:15.396 05:09:15 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@99 -- # BANDWIDTH_LIMIT_MB=19 00:18:15.396 05:09:15 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@100 -- # BANDWIDTH_LIMIT=19922944 00:18:15.396 05:09:15 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@101 -- # READ_BANDWIDTH_LIMIT_MB=9 00:18:15.396 05:09:15 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@102 -- # READ_BANDWIDTH_LIMIT=9437184 00:18:15.396 05:09:15 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@105 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_ios_per_sec 20000 00:18:15.396 05:09:15 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.396 05:09:15 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:15.396 05:09:15 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.396 05:09:15 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@106 -- # run_fio Malloc0 00:18:15.396 05:09:15 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:18:15.396 05:09:15 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:18:15.396 05:09:15 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:18:15.396 05:09:15 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:18:15.396 05:09:15 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:18:15.396 05:09:15 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:18:15.396 05:09:15 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:18:15.396 05:09:15 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:18:15.396 05:09:15 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.396 05:09:15 
iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:15.397 05:09:15 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.397 05:09:15 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:18:15.397 "tick_rate": 2200000000, 00:18:15.397 "ticks": 2188094040956, 00:18:15.397 "bdevs": [ 00:18:15.397 { 00:18:15.397 "name": "Malloc0", 00:18:15.397 "bytes_read": 207100416, 00:18:15.397 "num_read_ops": 201219, 00:18:15.397 "bytes_written": 0, 00:18:15.397 "num_write_ops": 0, 00:18:15.397 "bytes_unmapped": 0, 00:18:15.397 "num_unmap_ops": 0, 00:18:15.397 "bytes_copied": 0, 00:18:15.397 "num_copy_ops": 0, 00:18:15.397 "read_latency_ticks": 54426050513, 00:18:15.397 "max_read_latency_ticks": 2443389, 00:18:15.397 "min_read_latency_ticks": 11678, 00:18:15.397 "write_latency_ticks": 0, 00:18:15.397 "max_write_latency_ticks": 0, 00:18:15.397 "min_write_latency_ticks": 0, 00:18:15.397 "unmap_latency_ticks": 0, 00:18:15.397 "max_unmap_latency_ticks": 0, 00:18:15.397 "min_unmap_latency_ticks": 0, 00:18:15.397 "copy_latency_ticks": 0, 00:18:15.397 "max_copy_latency_ticks": 0, 00:18:15.397 "min_copy_latency_ticks": 0, 00:18:15.397 "io_error": {} 00:18:15.397 } 00:18:15.397 ] 00:18:15.397 }' 00:18:15.397 05:09:15 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:18:15.397 05:09:15 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=201219 00:18:15.397 05:09:15 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:18:15.655 05:09:15 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=207100416 00:18:15.655 05:09:15 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:18:15.655 [global] 00:18:15.655 thread=1 00:18:15.655 invalidate=1 00:18:15.655 rw=randread 00:18:15.655 time_based=1 00:18:15.655 runtime=5 00:18:15.655 ioengine=libaio 00:18:15.655 direct=1 00:18:15.655 
bs=1024 00:18:15.655 iodepth=128 00:18:15.655 norandommap=1 00:18:15.655 numjobs=1 00:18:15.655 00:18:15.655 [job0] 00:18:15.655 filename=/dev/sda 00:18:15.655 queue_depth set to 113 (sda) 00:18:15.655 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:18:15.655 fio-3.35 00:18:15.655 Starting 1 thread 00:18:20.962 00:18:20.962 job0: (groupid=0, jobs=1): err= 0: pid=87746: Tue Jul 23 05:09:20 2024 00:18:20.962 read: IOPS=20.0k, BW=19.5MiB/s (20.5MB/s)(97.8MiB/5006msec) 00:18:20.962 slat (usec): min=2, max=1457, avg=47.32, stdev=176.88 00:18:20.962 clat (usec): min=935, max=11785, avg=6350.53, stdev=477.56 00:18:20.962 lat (usec): min=941, max=11789, avg=6397.85, stdev=486.98 00:18:20.962 clat percentiles (usec): 00:18:20.962 | 1.00th=[ 5342], 5.00th=[ 5669], 10.00th=[ 5932], 20.00th=[ 6063], 00:18:20.962 | 30.00th=[ 6063], 40.00th=[ 6128], 50.00th=[ 6194], 60.00th=[ 6259], 00:18:20.962 | 70.00th=[ 6783], 80.00th=[ 6915], 90.00th=[ 6980], 95.00th=[ 7046], 00:18:20.962 | 99.00th=[ 7177], 99.50th=[ 7308], 99.90th=[ 7701], 99.95th=[ 9634], 00:18:20.962 | 99.99th=[11731] 00:18:20.962 bw ( KiB/s): min=20000, max=20040, per=100.00%, avg=20028.00, stdev=18.33, samples=9 00:18:20.962 iops : min=20000, max=20040, avg=20028.00, stdev=18.33, samples=9 00:18:20.962 lat (usec) : 1000=0.01% 00:18:20.962 lat (msec) : 2=0.04%, 4=0.04%, 10=99.88%, 20=0.04% 00:18:20.962 cpu : usr=5.99%, sys=11.03%, ctx=54726, majf=0, minf=32 00:18:20.962 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:20.962 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:20.962 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:20.962 issued rwts: total=100122,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:20.962 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:20.962 00:18:20.962 Run status group 0 (all jobs): 00:18:20.962 READ: bw=19.5MiB/s (20.5MB/s), 
19.5MiB/s-19.5MiB/s (20.5MB/s-20.5MB/s), io=97.8MiB (103MB), run=5006-5006msec 00:18:20.962 00:18:20.962 Disk stats (read/write): 00:18:20.962 sda: ios=97840/0, merge=0/0, ticks=533581/0, in_queue=533581, util=98.13% 00:18:20.962 05:09:20 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:18:20.962 05:09:20 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.962 05:09:20 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:20.962 05:09:20 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.962 05:09:20 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:18:20.962 "tick_rate": 2200000000, 00:18:20.962 "ticks": 2200080821988, 00:18:20.962 "bdevs": [ 00:18:20.962 { 00:18:20.962 "name": "Malloc0", 00:18:20.962 "bytes_read": 309625344, 00:18:20.962 "num_read_ops": 301341, 00:18:20.962 "bytes_written": 0, 00:18:20.962 "num_write_ops": 0, 00:18:20.962 "bytes_unmapped": 0, 00:18:20.962 "num_unmap_ops": 0, 00:18:20.962 "bytes_copied": 0, 00:18:20.962 "num_copy_ops": 0, 00:18:20.962 "read_latency_ticks": 627646309387, 00:18:20.962 "max_read_latency_ticks": 7894440, 00:18:20.962 "min_read_latency_ticks": 11678, 00:18:20.962 "write_latency_ticks": 0, 00:18:20.962 "max_write_latency_ticks": 0, 00:18:20.962 "min_write_latency_ticks": 0, 00:18:20.962 "unmap_latency_ticks": 0, 00:18:20.962 "max_unmap_latency_ticks": 0, 00:18:20.962 "min_unmap_latency_ticks": 0, 00:18:20.962 "copy_latency_ticks": 0, 00:18:20.962 "max_copy_latency_ticks": 0, 00:18:20.962 "min_copy_latency_ticks": 0, 00:18:20.962 "io_error": {} 00:18:20.962 } 00:18:20.962 ] 00:18:20.962 }' 00:18:20.962 05:09:20 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:18:20.962 05:09:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=301341 00:18:20.962 05:09:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:18:20.962 05:09:21 
iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=309625344 00:18:20.962 05:09:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=20024 00:18:20.962 05:09:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=20504985 00:18:20.962 05:09:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@107 -- # verify_qos_limits 20024 20000 00:18:20.962 05:09:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@37 -- # local result=20024 00:18:20.962 05:09:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@38 -- # local limit=20000 00:18:20.962 05:09:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # bc 00:18:20.962 05:09:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # '[' 1 -eq 1 ']' 00:18:20.962 05:09:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # bc 00:18:20.962 05:09:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # '[' 1 -eq 1 ']' 00:18:20.962 05:09:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@110 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_ios_per_sec 0 00:18:20.962 05:09:21 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.962 05:09:21 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:20.962 05:09:21 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.962 05:09:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@111 -- # run_fio Malloc0 00:18:20.962 05:09:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:18:20.962 05:09:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:18:20.962 05:09:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:18:20.962 05:09:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:18:20.962 05:09:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:18:20.962 05:09:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:18:20.962 05:09:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:18:20.962 05:09:21 iscsi_tgt.iscsi_tgt_qos -- 
qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:18:20.962 05:09:21 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.962 05:09:21 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:20.962 05:09:21 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.962 05:09:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:18:20.962 "tick_rate": 2200000000, 00:18:20.962 "ticks": 2200415355299, 00:18:20.962 "bdevs": [ 00:18:20.962 { 00:18:20.962 "name": "Malloc0", 00:18:20.962 "bytes_read": 309625344, 00:18:20.962 "num_read_ops": 301341, 00:18:20.962 "bytes_written": 0, 00:18:20.962 "num_write_ops": 0, 00:18:20.962 "bytes_unmapped": 0, 00:18:20.962 "num_unmap_ops": 0, 00:18:20.962 "bytes_copied": 0, 00:18:20.962 "num_copy_ops": 0, 00:18:20.962 "read_latency_ticks": 627646309387, 00:18:20.962 "max_read_latency_ticks": 7894440, 00:18:20.962 "min_read_latency_ticks": 11678, 00:18:20.962 "write_latency_ticks": 0, 00:18:20.962 "max_write_latency_ticks": 0, 00:18:20.962 "min_write_latency_ticks": 0, 00:18:20.962 "unmap_latency_ticks": 0, 00:18:20.962 "max_unmap_latency_ticks": 0, 00:18:20.962 "min_unmap_latency_ticks": 0, 00:18:20.962 "copy_latency_ticks": 0, 00:18:20.962 "max_copy_latency_ticks": 0, 00:18:20.962 "min_copy_latency_ticks": 0, 00:18:20.962 "io_error": {} 00:18:20.962 } 00:18:20.962 ] 00:18:20.962 }' 00:18:20.962 05:09:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:18:21.222 05:09:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=301341 00:18:21.222 05:09:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:18:21.222 05:09:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=309625344 00:18:21.222 05:09:21 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:18:21.222 [global] 00:18:21.222 
thread=1 00:18:21.222 invalidate=1 00:18:21.222 rw=randread 00:18:21.222 time_based=1 00:18:21.222 runtime=5 00:18:21.222 ioengine=libaio 00:18:21.222 direct=1 00:18:21.222 bs=1024 00:18:21.222 iodepth=128 00:18:21.222 norandommap=1 00:18:21.222 numjobs=1 00:18:21.222 00:18:21.222 [job0] 00:18:21.222 filename=/dev/sda 00:18:21.222 queue_depth set to 113 (sda) 00:18:21.222 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:18:21.223 fio-3.35 00:18:21.223 Starting 1 thread 00:18:26.502 00:18:26.502 job0: (groupid=0, jobs=1): err= 0: pid=87830: Tue Jul 23 05:09:26 2024 00:18:26.502 read: IOPS=40.1k, BW=39.2MiB/s (41.1MB/s)(196MiB/5003msec) 00:18:26.502 slat (nsec): min=1562, max=713938, avg=23272.91, stdev=73375.55 00:18:26.502 clat (usec): min=1186, max=5580, avg=3164.10, stdev=163.16 00:18:26.502 lat (usec): min=1192, max=5586, avg=3187.38, stdev=147.19 00:18:26.502 clat percentiles (usec): 00:18:26.502 | 1.00th=[ 2737], 5.00th=[ 2868], 10.00th=[ 2966], 20.00th=[ 3064], 00:18:26.502 | 30.00th=[ 3097], 40.00th=[ 3130], 50.00th=[ 3163], 60.00th=[ 3195], 00:18:26.502 | 70.00th=[ 3228], 80.00th=[ 3294], 90.00th=[ 3359], 95.00th=[ 3425], 00:18:26.502 | 99.00th=[ 3556], 99.50th=[ 3621], 99.90th=[ 3720], 99.95th=[ 3785], 00:18:26.502 | 99.99th=[ 5211] 00:18:26.502 bw ( KiB/s): min=39456, max=40640, per=99.96%, avg=40125.11, stdev=377.37, samples=9 00:18:26.502 iops : min=39456, max=40640, avg=40125.11, stdev=377.37, samples=9 00:18:26.502 lat (msec) : 2=0.03%, 4=99.93%, 10=0.04% 00:18:26.502 cpu : usr=6.90%, sys=15.09%, ctx=112217, majf=0, minf=32 00:18:26.502 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:18:26.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:26.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:26.502 issued rwts: total=200835,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:26.502 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:18:26.502 00:18:26.502 Run status group 0 (all jobs): 00:18:26.502 READ: bw=39.2MiB/s (41.1MB/s), 39.2MiB/s-39.2MiB/s (41.1MB/s-41.1MB/s), io=196MiB (206MB), run=5003-5003msec 00:18:26.502 00:18:26.502 Disk stats (read/write): 00:18:26.502 sda: ios=196257/0, merge=0/0, ticks=534610/0, in_queue=534610, util=98.09% 00:18:26.502 05:09:26 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:18:26.502 05:09:26 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.502 05:09:26 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:26.502 05:09:26 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.502 05:09:26 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:18:26.502 "tick_rate": 2200000000, 00:18:26.502 "ticks": 2212400253452, 00:18:26.502 "bdevs": [ 00:18:26.502 { 00:18:26.502 "name": "Malloc0", 00:18:26.502 "bytes_read": 515280384, 00:18:26.502 "num_read_ops": 502176, 00:18:26.502 "bytes_written": 0, 00:18:26.502 "num_write_ops": 0, 00:18:26.502 "bytes_unmapped": 0, 00:18:26.502 "num_unmap_ops": 0, 00:18:26.502 "bytes_copied": 0, 00:18:26.502 "num_copy_ops": 0, 00:18:26.502 "read_latency_ticks": 682071249466, 00:18:26.502 "max_read_latency_ticks": 7894440, 00:18:26.502 "min_read_latency_ticks": 11678, 00:18:26.502 "write_latency_ticks": 0, 00:18:26.502 "max_write_latency_ticks": 0, 00:18:26.502 "min_write_latency_ticks": 0, 00:18:26.502 "unmap_latency_ticks": 0, 00:18:26.502 "max_unmap_latency_ticks": 0, 00:18:26.502 "min_unmap_latency_ticks": 0, 00:18:26.502 "copy_latency_ticks": 0, 00:18:26.502 "max_copy_latency_ticks": 0, 00:18:26.502 "min_copy_latency_ticks": 0, 00:18:26.502 "io_error": {} 00:18:26.502 } 00:18:26.502 ] 00:18:26.502 }' 00:18:26.502 05:09:26 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:18:26.502 05:09:26 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # 
end_io_count=502176 00:18:26.502 05:09:26 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:18:26.502 05:09:26 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=515280384 00:18:26.502 05:09:26 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=40167 00:18:26.502 05:09:26 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=41131008 00:18:26.502 05:09:26 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@112 -- # '[' 40167 -gt 20000 ']' 00:18:26.502 05:09:26 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@115 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_ios_per_sec 20000 00:18:26.502 05:09:26 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.502 05:09:26 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:26.502 05:09:26 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.502 05:09:26 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@116 -- # run_fio Malloc0 00:18:26.502 05:09:26 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:18:26.502 05:09:26 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:18:26.502 05:09:26 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:18:26.502 05:09:26 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:18:26.502 05:09:26 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:18:26.502 05:09:26 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:18:26.502 05:09:26 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:18:26.502 05:09:26 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:18:26.502 05:09:26 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.502 05:09:26 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:26.760 05:09:26 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.760 
05:09:26 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:18:26.760 "tick_rate": 2200000000, 00:18:26.760 "ticks": 2212708245997, 00:18:26.760 "bdevs": [ 00:18:26.760 { 00:18:26.760 "name": "Malloc0", 00:18:26.760 "bytes_read": 515280384, 00:18:26.760 "num_read_ops": 502176, 00:18:26.760 "bytes_written": 0, 00:18:26.760 "num_write_ops": 0, 00:18:26.760 "bytes_unmapped": 0, 00:18:26.760 "num_unmap_ops": 0, 00:18:26.760 "bytes_copied": 0, 00:18:26.760 "num_copy_ops": 0, 00:18:26.760 "read_latency_ticks": 682071249466, 00:18:26.760 "max_read_latency_ticks": 7894440, 00:18:26.760 "min_read_latency_ticks": 11678, 00:18:26.760 "write_latency_ticks": 0, 00:18:26.760 "max_write_latency_ticks": 0, 00:18:26.760 "min_write_latency_ticks": 0, 00:18:26.760 "unmap_latency_ticks": 0, 00:18:26.760 "max_unmap_latency_ticks": 0, 00:18:26.760 "min_unmap_latency_ticks": 0, 00:18:26.760 "copy_latency_ticks": 0, 00:18:26.760 "max_copy_latency_ticks": 0, 00:18:26.760 "min_copy_latency_ticks": 0, 00:18:26.760 "io_error": {} 00:18:26.760 } 00:18:26.760 ] 00:18:26.760 }' 00:18:26.760 05:09:26 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:18:26.760 05:09:26 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=502176 00:18:26.760 05:09:26 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:18:26.760 05:09:26 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=515280384 00:18:26.760 05:09:26 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:18:26.760 [global] 00:18:26.760 thread=1 00:18:26.760 invalidate=1 00:18:26.760 rw=randread 00:18:26.760 time_based=1 00:18:26.760 runtime=5 00:18:26.760 ioengine=libaio 00:18:26.760 direct=1 00:18:26.760 bs=1024 00:18:26.760 iodepth=128 00:18:26.760 norandommap=1 00:18:26.760 numjobs=1 00:18:26.760 00:18:26.760 [job0] 00:18:26.760 filename=/dev/sda 00:18:26.760 queue_depth set to 
113 (sda) 00:18:27.018 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:18:27.018 fio-3.35 00:18:27.018 Starting 1 thread 00:18:32.284 00:18:32.284 job0: (groupid=0, jobs=1): err= 0: pid=87921: Tue Jul 23 05:09:32 2024 00:18:32.284 read: IOPS=20.0k, BW=19.5MiB/s (20.5MB/s)(97.8MiB/5006msec) 00:18:32.284 slat (nsec): min=1689, max=2204.2k, avg=47678.92, stdev=174704.50 00:18:32.284 clat (usec): min=1196, max=11743, avg=6349.07, stdev=449.52 00:18:32.284 lat (usec): min=1202, max=11751, avg=6396.75, stdev=459.76 00:18:32.284 clat percentiles (usec): 00:18:32.284 | 1.00th=[ 5473], 5.00th=[ 5932], 10.00th=[ 5997], 20.00th=[ 6063], 00:18:32.284 | 30.00th=[ 6063], 40.00th=[ 6128], 50.00th=[ 6128], 60.00th=[ 6194], 00:18:32.284 | 70.00th=[ 6783], 80.00th=[ 6849], 90.00th=[ 6980], 95.00th=[ 6980], 00:18:32.284 | 99.00th=[ 7111], 99.50th=[ 7177], 99.90th=[ 7439], 99.95th=[ 8848], 00:18:32.284 | 99.99th=[10945] 00:18:32.284 bw ( KiB/s): min=20000, max=20040, per=100.00%, avg=20027.56, stdev=16.21, samples=9 00:18:32.284 iops : min=20000, max=20040, avg=20027.56, stdev=16.21, samples=9 00:18:32.284 lat (msec) : 2=0.06%, 4=0.04%, 10=99.87%, 20=0.03% 00:18:32.284 cpu : usr=5.03%, sys=10.35%, ctx=72687, majf=0, minf=32 00:18:32.284 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:32.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.284 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:32.284 issued rwts: total=100132,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.284 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:32.284 00:18:32.284 Run status group 0 (all jobs): 00:18:32.284 READ: bw=19.5MiB/s (20.5MB/s), 19.5MiB/s-19.5MiB/s (20.5MB/s-20.5MB/s), io=97.8MiB (103MB), run=5006-5006msec 00:18:32.284 00:18:32.284 Disk stats (read/write): 00:18:32.284 sda: ios=97818/0, merge=0/0, ticks=525937/0, in_queue=525937, 
util=98.11% 00:18:32.284 05:09:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:18:32.284 05:09:32 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.284 05:09:32 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:32.284 05:09:32 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.284 05:09:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:18:32.284 "tick_rate": 2200000000, 00:18:32.284 "ticks": 2224686611698, 00:18:32.284 "bdevs": [ 00:18:32.284 { 00:18:32.284 "name": "Malloc0", 00:18:32.284 "bytes_read": 617815552, 00:18:32.284 "num_read_ops": 602308, 00:18:32.284 "bytes_written": 0, 00:18:32.284 "num_write_ops": 0, 00:18:32.284 "bytes_unmapped": 0, 00:18:32.284 "num_unmap_ops": 0, 00:18:32.284 "bytes_copied": 0, 00:18:32.284 "num_copy_ops": 0, 00:18:32.284 "read_latency_ticks": 1274022846930, 00:18:32.284 "max_read_latency_ticks": 9453727, 00:18:32.284 "min_read_latency_ticks": 11678, 00:18:32.284 "write_latency_ticks": 0, 00:18:32.284 "max_write_latency_ticks": 0, 00:18:32.284 "min_write_latency_ticks": 0, 00:18:32.284 "unmap_latency_ticks": 0, 00:18:32.284 "max_unmap_latency_ticks": 0, 00:18:32.284 "min_unmap_latency_ticks": 0, 00:18:32.284 "copy_latency_ticks": 0, 00:18:32.284 "max_copy_latency_ticks": 0, 00:18:32.284 "min_copy_latency_ticks": 0, 00:18:32.284 "io_error": {} 00:18:32.284 } 00:18:32.284 ] 00:18:32.284 }' 00:18:32.284 05:09:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:18:32.284 05:09:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=602308 00:18:32.284 05:09:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:18:32.284 05:09:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=617815552 00:18:32.284 05:09:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=20026 00:18:32.284 05:09:32 iscsi_tgt.iscsi_tgt_qos -- 
qos/qos.sh@33 -- # BANDWIDTH_RESULT=20507033 00:18:32.284 05:09:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@117 -- # verify_qos_limits 20026 20000 00:18:32.284 05:09:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@37 -- # local result=20026 00:18:32.284 05:09:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@38 -- # local limit=20000 00:18:32.284 05:09:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # bc 00:18:32.284 05:09:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # '[' 1 -eq 1 ']' 00:18:32.284 05:09:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # bc 00:18:32.284 05:09:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # '[' 1 -eq 1 ']' 00:18:32.284 I/O rate limiting tests successful 00:18:32.284 05:09:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@119 -- # echo 'I/O rate limiting tests successful' 00:18:32.284 05:09:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@122 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_ios_per_sec 0 --rw_mbytes_per_sec 19 00:18:32.284 05:09:32 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.284 05:09:32 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:32.284 05:09:32 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.284 05:09:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@123 -- # run_fio Malloc0 00:18:32.284 05:09:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:18:32.284 05:09:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:18:32.284 05:09:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:18:32.284 05:09:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:18:32.284 05:09:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:18:32.284 05:09:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:18:32.284 05:09:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:18:32.284 05:09:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd 
bdev_get_iostat -b Malloc0 00:18:32.284 05:09:32 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.285 05:09:32 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:32.285 05:09:32 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.285 05:09:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:18:32.285 "tick_rate": 2200000000, 00:18:32.285 "ticks": 2225003538056, 00:18:32.285 "bdevs": [ 00:18:32.285 { 00:18:32.285 "name": "Malloc0", 00:18:32.285 "bytes_read": 617815552, 00:18:32.285 "num_read_ops": 602308, 00:18:32.285 "bytes_written": 0, 00:18:32.285 "num_write_ops": 0, 00:18:32.285 "bytes_unmapped": 0, 00:18:32.285 "num_unmap_ops": 0, 00:18:32.285 "bytes_copied": 0, 00:18:32.285 "num_copy_ops": 0, 00:18:32.285 "read_latency_ticks": 1274022846930, 00:18:32.285 "max_read_latency_ticks": 9453727, 00:18:32.285 "min_read_latency_ticks": 11678, 00:18:32.285 "write_latency_ticks": 0, 00:18:32.285 "max_write_latency_ticks": 0, 00:18:32.285 "min_write_latency_ticks": 0, 00:18:32.285 "unmap_latency_ticks": 0, 00:18:32.285 "max_unmap_latency_ticks": 0, 00:18:32.285 "min_unmap_latency_ticks": 0, 00:18:32.285 "copy_latency_ticks": 0, 00:18:32.285 "max_copy_latency_ticks": 0, 00:18:32.285 "min_copy_latency_ticks": 0, 00:18:32.285 "io_error": {} 00:18:32.285 } 00:18:32.285 ] 00:18:32.285 }' 00:18:32.285 05:09:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:18:32.285 05:09:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=602308 00:18:32.285 05:09:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:18:32.285 05:09:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=617815552 00:18:32.285 05:09:32 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:18:32.285 [global] 00:18:32.285 thread=1 00:18:32.285 
invalidate=1 00:18:32.285 rw=randread 00:18:32.285 time_based=1 00:18:32.285 runtime=5 00:18:32.285 ioengine=libaio 00:18:32.285 direct=1 00:18:32.285 bs=1024 00:18:32.285 iodepth=128 00:18:32.285 norandommap=1 00:18:32.285 numjobs=1 00:18:32.285 00:18:32.285 [job0] 00:18:32.285 filename=/dev/sda 00:18:32.285 queue_depth set to 113 (sda) 00:18:32.544 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:18:32.544 fio-3.35 00:18:32.544 Starting 1 thread 00:18:37.813 00:18:37.813 job0: (groupid=0, jobs=1): err= 0: pid=88011: Tue Jul 23 05:09:37 2024 00:18:37.813 read: IOPS=19.5k, BW=19.0MiB/s (19.9MB/s)(95.1MiB/5005msec) 00:18:37.813 slat (usec): min=2, max=1729, avg=48.78, stdev=193.45 00:18:37.813 clat (usec): min=1005, max=11901, avg=6528.59, stdev=558.62 00:18:37.813 lat (usec): min=1011, max=11905, avg=6577.37, stdev=555.85 00:18:37.813 clat percentiles (usec): 00:18:37.813 | 1.00th=[ 5342], 5.00th=[ 5669], 10.00th=[ 5800], 20.00th=[ 6063], 00:18:37.813 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6587], 60.00th=[ 6718], 00:18:37.813 | 70.00th=[ 6849], 80.00th=[ 6980], 90.00th=[ 7177], 95.00th=[ 7373], 00:18:37.813 | 99.00th=[ 7701], 99.50th=[ 7832], 99.90th=[ 8225], 99.95th=[ 9110], 00:18:37.813 | 99.99th=[10683] 00:18:37.813 bw ( KiB/s): min=19427, max=19532, per=100.00%, avg=19473.67, stdev=31.91, samples=9 00:18:37.813 iops : min=19427, max=19532, avg=19473.67, stdev=31.91, samples=9 00:18:37.813 lat (msec) : 2=0.03%, 4=0.04%, 10=99.89%, 20=0.05% 00:18:37.813 cpu : usr=5.10%, sys=11.53%, ctx=52823, majf=0, minf=32 00:18:37.814 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:37.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:37.814 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:37.814 issued rwts: total=97374,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:37.814 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:18:37.814 00:18:37.814 Run status group 0 (all jobs): 00:18:37.814 READ: bw=19.0MiB/s (19.9MB/s), 19.0MiB/s-19.0MiB/s (19.9MB/s-19.9MB/s), io=95.1MiB (99.7MB), run=5005-5005msec 00:18:37.814 00:18:37.814 Disk stats (read/write): 00:18:37.814 sda: ios=95175/0, merge=0/0, ticks=532411/0, in_queue=532411, util=98.13% 00:18:37.814 05:09:37 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:18:37.814 05:09:37 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.814 05:09:37 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:37.814 05:09:37 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.814 05:09:37 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:18:37.814 "tick_rate": 2200000000, 00:18:37.814 "ticks": 2237004886116, 00:18:37.814 "bdevs": [ 00:18:37.814 { 00:18:37.814 "name": "Malloc0", 00:18:37.814 "bytes_read": 717526528, 00:18:37.814 "num_read_ops": 699682, 00:18:37.814 "bytes_written": 0, 00:18:37.814 "num_write_ops": 0, 00:18:37.814 "bytes_unmapped": 0, 00:18:37.814 "num_unmap_ops": 0, 00:18:37.814 "bytes_copied": 0, 00:18:37.814 "num_copy_ops": 0, 00:18:37.814 "read_latency_ticks": 1828884513739, 00:18:37.814 "max_read_latency_ticks": 9453727, 00:18:37.814 "min_read_latency_ticks": 11678, 00:18:37.814 "write_latency_ticks": 0, 00:18:37.814 "max_write_latency_ticks": 0, 00:18:37.814 "min_write_latency_ticks": 0, 00:18:37.814 "unmap_latency_ticks": 0, 00:18:37.814 "max_unmap_latency_ticks": 0, 00:18:37.814 "min_unmap_latency_ticks": 0, 00:18:37.814 "copy_latency_ticks": 0, 00:18:37.814 "max_copy_latency_ticks": 0, 00:18:37.814 "min_copy_latency_ticks": 0, 00:18:37.814 "io_error": {} 00:18:37.814 } 00:18:37.814 ] 00:18:37.814 }' 00:18:37.814 05:09:37 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:18:37.814 05:09:37 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # 
end_io_count=699682 00:18:37.814 05:09:37 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:18:37.814 05:09:37 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=717526528 00:18:37.814 05:09:37 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=19474 00:18:37.814 05:09:37 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=19942195 00:18:37.814 05:09:37 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@124 -- # verify_qos_limits 19942195 19922944 00:18:37.814 05:09:37 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@37 -- # local result=19942195 00:18:37.814 05:09:37 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@38 -- # local limit=19922944 00:18:37.814 05:09:37 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # bc 00:18:37.814 05:09:37 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # '[' 1 -eq 1 ']' 00:18:37.814 05:09:37 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # bc 00:18:37.814 05:09:37 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # '[' 1 -eq 1 ']' 00:18:37.814 05:09:37 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@127 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_mbytes_per_sec 0 00:18:37.814 05:09:37 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.814 05:09:37 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:37.814 05:09:37 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.814 05:09:37 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@128 -- # run_fio Malloc0 00:18:37.814 05:09:37 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:18:37.814 05:09:37 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:18:37.814 05:09:37 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:18:37.814 05:09:37 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:18:37.814 05:09:37 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:18:37.814 05:09:37 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local 
end_bytes_read 00:18:37.814 05:09:37 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:18:37.814 05:09:37 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:18:37.814 05:09:37 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.814 05:09:37 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:37.814 05:09:37 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.814 05:09:37 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:18:37.814 "tick_rate": 2200000000, 00:18:37.814 "ticks": 2237330554144, 00:18:37.814 "bdevs": [ 00:18:37.814 { 00:18:37.814 "name": "Malloc0", 00:18:37.814 "bytes_read": 717526528, 00:18:37.814 "num_read_ops": 699682, 00:18:37.814 "bytes_written": 0, 00:18:37.814 "num_write_ops": 0, 00:18:37.814 "bytes_unmapped": 0, 00:18:37.814 "num_unmap_ops": 0, 00:18:37.814 "bytes_copied": 0, 00:18:37.814 "num_copy_ops": 0, 00:18:37.814 "read_latency_ticks": 1828884513739, 00:18:37.814 "max_read_latency_ticks": 9453727, 00:18:37.814 "min_read_latency_ticks": 11678, 00:18:37.814 "write_latency_ticks": 0, 00:18:37.814 "max_write_latency_ticks": 0, 00:18:37.814 "min_write_latency_ticks": 0, 00:18:37.814 "unmap_latency_ticks": 0, 00:18:37.814 "max_unmap_latency_ticks": 0, 00:18:37.814 "min_unmap_latency_ticks": 0, 00:18:37.814 "copy_latency_ticks": 0, 00:18:37.814 "max_copy_latency_ticks": 0, 00:18:37.814 "min_copy_latency_ticks": 0, 00:18:37.814 "io_error": {} 00:18:37.814 } 00:18:37.814 ] 00:18:37.814 }' 00:18:37.814 05:09:37 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:18:37.814 05:09:37 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=699682 00:18:37.814 05:09:37 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:18:37.814 05:09:38 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=717526528 00:18:37.814 05:09:38 
iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:18:38.084 [global] 00:18:38.084 thread=1 00:18:38.084 invalidate=1 00:18:38.084 rw=randread 00:18:38.084 time_based=1 00:18:38.084 runtime=5 00:18:38.084 ioengine=libaio 00:18:38.084 direct=1 00:18:38.084 bs=1024 00:18:38.084 iodepth=128 00:18:38.084 norandommap=1 00:18:38.084 numjobs=1 00:18:38.084 00:18:38.084 [job0] 00:18:38.084 filename=/dev/sda 00:18:38.084 queue_depth set to 113 (sda) 00:18:38.084 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:18:38.084 fio-3.35 00:18:38.084 Starting 1 thread 00:18:43.363 00:18:43.363 job0: (groupid=0, jobs=1): err= 0: pid=88105: Tue Jul 23 05:09:43 2024 00:18:43.363 read: IOPS=40.5k, BW=39.5MiB/s (41.4MB/s)(198MiB/5003msec) 00:18:43.363 slat (nsec): min=1763, max=517837, avg=23002.02, stdev=71780.70 00:18:43.363 clat (usec): min=858, max=6065, avg=3139.45, stdev=127.71 00:18:43.363 lat (usec): min=866, max=6068, avg=3162.45, stdev=106.98 00:18:43.363 clat percentiles (usec): 00:18:43.363 | 1.00th=[ 2769], 5.00th=[ 2933], 10.00th=[ 3032], 20.00th=[ 3064], 00:18:43.363 | 30.00th=[ 3097], 40.00th=[ 3097], 50.00th=[ 3130], 60.00th=[ 3163], 00:18:43.363 | 70.00th=[ 3195], 80.00th=[ 3228], 90.00th=[ 3294], 95.00th=[ 3326], 00:18:43.363 | 99.00th=[ 3458], 99.50th=[ 3490], 99.90th=[ 3589], 99.95th=[ 3687], 00:18:43.363 | 99.99th=[ 5604] 00:18:43.363 bw ( KiB/s): min=40000, max=41182, per=100.00%, avg=40544.44, stdev=406.81, samples=9 00:18:43.363 iops : min=40000, max=41182, avg=40544.44, stdev=406.81, samples=9 00:18:43.363 lat (usec) : 1000=0.01% 00:18:43.363 lat (msec) : 2=0.02%, 4=99.94%, 10=0.04% 00:18:43.363 cpu : usr=8.54%, sys=14.11%, ctx=113834, majf=0, minf=32 00:18:43.363 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:18:43.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:18:43.363 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:43.363 issued rwts: total=202435,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:43.363 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:43.363 00:18:43.363 Run status group 0 (all jobs): 00:18:43.363 READ: bw=39.5MiB/s (41.4MB/s), 39.5MiB/s-39.5MiB/s (41.4MB/s-41.4MB/s), io=198MiB (207MB), run=5003-5003msec 00:18:43.363 00:18:43.363 Disk stats (read/write): 00:18:43.363 sda: ios=197984/0, merge=0/0, ticks=534746/0, in_queue=534746, util=98.11% 00:18:43.363 05:09:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:18:43.363 05:09:43 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.363 05:09:43 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:43.363 05:09:43 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.363 05:09:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:18:43.363 "tick_rate": 2200000000, 00:18:43.363 "ticks": 2249310320368, 00:18:43.363 "bdevs": [ 00:18:43.363 { 00:18:43.363 "name": "Malloc0", 00:18:43.363 "bytes_read": 924819968, 00:18:43.363 "num_read_ops": 902117, 00:18:43.363 "bytes_written": 0, 00:18:43.363 "num_write_ops": 0, 00:18:43.363 "bytes_unmapped": 0, 00:18:43.363 "num_unmap_ops": 0, 00:18:43.363 "bytes_copied": 0, 00:18:43.363 "num_copy_ops": 0, 00:18:43.363 "read_latency_ticks": 1883866892129, 00:18:43.363 "max_read_latency_ticks": 9453727, 00:18:43.363 "min_read_latency_ticks": 11678, 00:18:43.363 "write_latency_ticks": 0, 00:18:43.363 "max_write_latency_ticks": 0, 00:18:43.363 "min_write_latency_ticks": 0, 00:18:43.363 "unmap_latency_ticks": 0, 00:18:43.363 "max_unmap_latency_ticks": 0, 00:18:43.363 "min_unmap_latency_ticks": 0, 00:18:43.363 "copy_latency_ticks": 0, 00:18:43.363 "max_copy_latency_ticks": 0, 00:18:43.363 "min_copy_latency_ticks": 0, 00:18:43.363 "io_error": {} 
00:18:43.363 } 00:18:43.363 ] 00:18:43.363 }' 00:18:43.363 05:09:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:18:43.363 05:09:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=902117 00:18:43.363 05:09:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:18:43.363 05:09:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=924819968 00:18:43.363 05:09:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=40487 00:18:43.363 05:09:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=41458688 00:18:43.363 05:09:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@129 -- # '[' 41458688 -gt 19922944 ']' 00:18:43.363 05:09:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@132 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_mbytes_per_sec 19 --r_mbytes_per_sec 9 00:18:43.363 05:09:43 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.363 05:09:43 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:43.363 05:09:43 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.363 05:09:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@133 -- # run_fio Malloc0 00:18:43.363 05:09:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:18:43.363 05:09:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:18:43.363 05:09:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:18:43.363 05:09:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:18:43.363 05:09:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:18:43.363 05:09:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:18:43.363 05:09:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:18:43.363 05:09:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:18:43.363 05:09:43 iscsi_tgt.iscsi_tgt_qos -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.363 05:09:43 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:43.363 05:09:43 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.363 05:09:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:18:43.363 "tick_rate": 2200000000, 00:18:43.363 "ticks": 2249609531973, 00:18:43.363 "bdevs": [ 00:18:43.363 { 00:18:43.363 "name": "Malloc0", 00:18:43.363 "bytes_read": 924819968, 00:18:43.363 "num_read_ops": 902117, 00:18:43.363 "bytes_written": 0, 00:18:43.363 "num_write_ops": 0, 00:18:43.363 "bytes_unmapped": 0, 00:18:43.363 "num_unmap_ops": 0, 00:18:43.363 "bytes_copied": 0, 00:18:43.363 "num_copy_ops": 0, 00:18:43.363 "read_latency_ticks": 1883866892129, 00:18:43.363 "max_read_latency_ticks": 9453727, 00:18:43.363 "min_read_latency_ticks": 11678, 00:18:43.363 "write_latency_ticks": 0, 00:18:43.363 "max_write_latency_ticks": 0, 00:18:43.363 "min_write_latency_ticks": 0, 00:18:43.363 "unmap_latency_ticks": 0, 00:18:43.363 "max_unmap_latency_ticks": 0, 00:18:43.363 "min_unmap_latency_ticks": 0, 00:18:43.363 "copy_latency_ticks": 0, 00:18:43.363 "max_copy_latency_ticks": 0, 00:18:43.363 "min_copy_latency_ticks": 0, 00:18:43.363 "io_error": {} 00:18:43.363 } 00:18:43.363 ] 00:18:43.363 }' 00:18:43.363 05:09:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:18:43.363 05:09:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=902117 00:18:43.363 05:09:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:18:43.622 05:09:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=924819968 00:18:43.622 05:09:43 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:18:43.622 [global] 00:18:43.622 thread=1 00:18:43.622 invalidate=1 00:18:43.622 rw=randread 00:18:43.622 time_based=1 00:18:43.622 
runtime=5 00:18:43.622 ioengine=libaio 00:18:43.622 direct=1 00:18:43.622 bs=1024 00:18:43.622 iodepth=128 00:18:43.622 norandommap=1 00:18:43.622 numjobs=1 00:18:43.622 00:18:43.622 [job0] 00:18:43.622 filename=/dev/sda 00:18:43.622 queue_depth set to 113 (sda) 00:18:43.622 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:18:43.622 fio-3.35 00:18:43.622 Starting 1 thread 00:18:48.904 00:18:48.904 job0: (groupid=0, jobs=1): err= 0: pid=88186: Tue Jul 23 05:09:48 2024 00:18:48.904 read: IOPS=9214, BW=9215KiB/s (9436kB/s)(45.1MiB/5013msec) 00:18:48.904 slat (nsec): min=1918, max=2113.0k, avg=104791.85, stdev=281131.68 00:18:48.904 clat (usec): min=2150, max=26031, avg=13780.00, stdev=679.06 00:18:48.904 lat (usec): min=2165, max=26035, avg=13884.79, stdev=638.32 00:18:48.904 clat percentiles (usec): 00:18:48.904 | 1.00th=[12518], 5.00th=[13042], 10.00th=[13173], 20.00th=[13304], 00:18:48.904 | 30.00th=[13698], 40.00th=[13829], 50.00th=[13960], 60.00th=[13960], 00:18:48.904 | 70.00th=[14091], 80.00th=[14091], 90.00th=[14222], 95.00th=[14222], 00:18:48.904 | 99.00th=[14615], 99.50th=[14877], 99.90th=[21103], 99.95th=[23987], 00:18:48.904 | 99.99th=[26084] 00:18:48.904 bw ( KiB/s): min= 9098, max= 9252, per=99.98%, avg=9213.40, stdev=42.54, samples=10 00:18:48.904 iops : min= 9098, max= 9252, avg=9213.40, stdev=42.54, samples=10 00:18:48.904 lat (msec) : 4=0.07%, 10=0.12%, 20=99.69%, 50=0.12% 00:18:48.904 cpu : usr=3.87%, sys=7.04%, ctx=27881, majf=0, minf=32 00:18:48.904 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:48.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:48.904 issued rwts: total=46194,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:48.904 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:48.904 00:18:48.904 Run status group 0 (all 
jobs): 00:18:48.904 READ: bw=9215KiB/s (9436kB/s), 9215KiB/s-9215KiB/s (9436kB/s-9436kB/s), io=45.1MiB (47.3MB), run=5013-5013msec 00:18:48.904 00:18:48.904 Disk stats (read/write): 00:18:48.904 sda: ios=45066/0, merge=0/0, ticks=545558/0, in_queue=545558, util=98.11% 00:18:48.904 05:09:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:18:48.904 05:09:48 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.904 05:09:48 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:48.904 05:09:48 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.904 05:09:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:18:48.904 "tick_rate": 2200000000, 00:18:48.904 "ticks": 2261561943998, 00:18:48.904 "bdevs": [ 00:18:48.904 { 00:18:48.904 "name": "Malloc0", 00:18:48.904 "bytes_read": 972122624, 00:18:48.904 "num_read_ops": 948311, 00:18:48.904 "bytes_written": 0, 00:18:48.904 "num_write_ops": 0, 00:18:48.904 "bytes_unmapped": 0, 00:18:48.904 "num_unmap_ops": 0, 00:18:48.904 "bytes_copied": 0, 00:18:48.904 "num_copy_ops": 0, 00:18:48.904 "read_latency_ticks": 2531932703231, 00:18:48.904 "max_read_latency_ticks": 16207780, 00:18:48.904 "min_read_latency_ticks": 11678, 00:18:48.904 "write_latency_ticks": 0, 00:18:48.904 "max_write_latency_ticks": 0, 00:18:48.904 "min_write_latency_ticks": 0, 00:18:48.904 "unmap_latency_ticks": 0, 00:18:48.904 "max_unmap_latency_ticks": 0, 00:18:48.904 "min_unmap_latency_ticks": 0, 00:18:48.904 "copy_latency_ticks": 0, 00:18:48.904 "max_copy_latency_ticks": 0, 00:18:48.904 "min_copy_latency_ticks": 0, 00:18:48.904 "io_error": {} 00:18:48.904 } 00:18:48.904 ] 00:18:48.904 }' 00:18:48.904 05:09:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:18:48.904 05:09:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=948311 00:18:48.904 05:09:48 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- 
# jq -r '.bdevs[0].bytes_read' 00:18:48.904 05:09:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=972122624 00:18:48.904 05:09:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=9238 00:18:48.904 05:09:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=9460531 00:18:48.904 05:09:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@134 -- # verify_qos_limits 9460531 9437184 00:18:48.904 05:09:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@37 -- # local result=9460531 00:18:48.904 05:09:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@38 -- # local limit=9437184 00:18:48.904 05:09:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # bc 00:18:48.904 05:09:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # '[' 1 -eq 1 ']' 00:18:48.904 05:09:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # bc 00:18:48.904 I/O bandwidth limiting tests successful 00:18:48.904 Cleaning up iSCSI connection 00:18:48.904 05:09:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # '[' 1 -eq 1 ']' 00:18:48.904 05:09:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@136 -- # echo 'I/O bandwidth limiting tests successful' 00:18:48.904 05:09:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@138 -- # iscsicleanup 00:18:48.904 05:09:49 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:18:48.904 05:09:49 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:18:48.904 Logging out of session [sid: 20, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:18:48.904 Logout of [sid: 20, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
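The `IOPS_RESULT`/`BANDWIDTH_RESULT` values in the log above are derived from the before/after `bdev_get_iostat` counters divided by the 5-second fio runtime, and `verify_qos_limits` then checks the measured bandwidth against the configured limit with `bc`. A minimal shell sketch of that arithmetic, using the counter values from this log section (the 10% tolerance below is an assumption for illustration; the actual threshold lives in qos.sh):

```shell
#!/usr/bin/env bash
# Recompute IOPS and bandwidth from the iostat deltas shown in this log.
start_io_count=902117        # num_read_ops before fio
start_bytes_read=924819968   # bytes_read before fio
end_io_count=948311          # num_read_ops after fio
end_bytes_read=972122624     # bytes_read after fio
run_time=5                   # fio runtime in seconds

IOPS_RESULT=$(( (end_io_count - start_io_count) / run_time ))
BANDWIDTH_RESULT=$(( (end_bytes_read - start_bytes_read) / run_time ))
echo "IOPS=${IOPS_RESULT} BW=${BANDWIDTH_RESULT} bytes/s"

# verify_qos_limits-style check: the measured bandwidth should sit close to
# the configured 9 MiB/s read limit (tolerance here is an assumed +/-10%).
limit=9437184
if (( BANDWIDTH_RESULT <= limit + limit / 10 && \
      BANDWIDTH_RESULT >= limit - limit / 10 )); then
    echo "within limit"
fi
```

Running this reproduces the `IOPS_RESULT=9238` and `BANDWIDTH_RESULT=9460531` values logged above, and the measured ~9.46 MB/s passes the check against the 9 MiB/s (`9437184` bytes/s) read limit.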
00:18:48.904 05:09:49 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:18:48.904 05:09:49 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@983 -- # rm -rf 00:18:48.904 05:09:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@139 -- # rpc_cmd iscsi_delete_target_node iqn.2016-06.io.spdk:Target1 00:18:48.904 05:09:49 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.904 05:09:49 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:48.904 05:09:49 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.904 05:09:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@141 -- # rm -f ./local-job0-0-verify.state 00:18:48.904 05:09:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:18:48.904 05:09:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@143 -- # killprocess 87572 00:18:48.904 05:09:49 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@948 -- # '[' -z 87572 ']' 00:18:48.904 05:09:49 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@952 -- # kill -0 87572 00:18:48.904 05:09:49 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@953 -- # uname 00:18:48.904 05:09:49 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:48.904 05:09:49 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87572 00:18:49.163 killing process with pid 87572 00:18:49.163 05:09:49 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:49.163 05:09:49 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:49.163 05:09:49 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87572' 00:18:49.163 05:09:49 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@967 -- # kill 87572 00:18:49.163 05:09:49 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@972 -- # wait 87572 00:18:49.422 
05:09:49 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@145 -- # iscsitestfini 00:18:49.422 05:09:49 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:18:49.422 ************************************ 00:18:49.422 END TEST iscsi_tgt_qos 00:18:49.422 ************************************ 00:18:49.422 00:18:49.422 real 0m41.794s 00:18:49.422 user 0m35.169s 00:18:49.422 sys 0m12.689s 00:18:49.422 05:09:49 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:49.422 05:09:49 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:49.422 05:09:49 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:18:49.422 05:09:49 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@39 -- # run_test iscsi_tgt_ip_migration /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ip_migration/ip_migration.sh 00:18:49.422 05:09:49 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:49.422 05:09:49 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:49.422 05:09:49 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:18:49.422 ************************************ 00:18:49.422 START TEST iscsi_tgt_ip_migration 00:18:49.422 ************************************ 00:18:49.422 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ip_migration/ip_migration.sh 00:18:49.680 * Looking for test storage... 
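The teardown above ends with `killprocess 87572`, which probes the target pid before signalling it. A simplified, self-contained sketch of that probe-kill-wait pattern (the real helper in autotest_common.sh also checks the process name and handles sudo-owned processes, which is omitted here):

```shell
#!/usr/bin/env bash
# Simplified version of the killprocess pattern seen in the log:
# verify the pid is alive with `kill -0`, signal it, then reap it.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1   # nothing to do if it is gone
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap; ignore the signal exit status
    return 0
}

sleep 30 &
bgpid=$!
killprocess "$bgpid" && echo "killed process with pid $bgpid"
```

Note that `wait` on a killed child returns the signal exit status (143 for SIGTERM), so the helper discards it; otherwise the `&&` echo after a successful kill would never fire.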
00:18:49.680 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ip_migration 00:18:49.680 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:18:49.680 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:18:49.680 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:18:49.680 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:18:49.680 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:18:49.680 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:18:49.680 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:18:49.680 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:18:49.680 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:18:49.680 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:18:49.680 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:18:49.680 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:18:49.680 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:18:49.680 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:18:49.680 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:18:49.680 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:18:49.680 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@26 
-- # INITIATOR_NAME=ANY 00:18:49.680 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:18:49.680 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:18:49.680 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:18:49.680 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@11 -- # iscsitestinit 00:18:49.680 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:18:49.680 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@13 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:18:49.680 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@14 -- # pids=() 00:18:49.680 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@16 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:18:49.680 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:18:49.680 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:18:49.681 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:18:49.681 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:18:49.681 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:18:49.681 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:18:49.681 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:18:49.681 05:09:49 
iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:18:49.681 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:18:49.681 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:18:49.681 Running ip migration tests 00:18:49.681 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:18:49.681 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:18:49.681 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:18:49.681 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:18:49.681 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:18:49.681 #define SPDK_CONFIG_H 00:18:49.681 #define SPDK_CONFIG_APPS 1 00:18:49.681 #define SPDK_CONFIG_ARCH native 00:18:49.681 #undef SPDK_CONFIG_ASAN 00:18:49.681 #undef SPDK_CONFIG_AVAHI 00:18:49.681 #undef SPDK_CONFIG_CET 00:18:49.681 #define SPDK_CONFIG_COVERAGE 1 00:18:49.681 #define SPDK_CONFIG_CROSS_PREFIX 00:18:49.681 #undef SPDK_CONFIG_CRYPTO 00:18:49.681 #undef SPDK_CONFIG_CRYPTO_MLX5 00:18:49.681 #undef SPDK_CONFIG_CUSTOMOCF 00:18:49.681 #undef SPDK_CONFIG_DAOS 00:18:49.681 #define SPDK_CONFIG_DAOS_DIR 00:18:49.681 #define SPDK_CONFIG_DEBUG 1 00:18:49.681 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:18:49.681 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:18:49.681 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:18:49.681 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:18:49.681 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:18:49.681 #undef SPDK_CONFIG_DPDK_UADK 
00:18:49.681 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:18:49.681 #define SPDK_CONFIG_EXAMPLES 1 00:18:49.681 #undef SPDK_CONFIG_FC 00:18:49.681 #define SPDK_CONFIG_FC_PATH 00:18:49.681 #define SPDK_CONFIG_FIO_PLUGIN 1 00:18:49.681 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:18:49.681 #undef SPDK_CONFIG_FUSE 00:18:49.681 #undef SPDK_CONFIG_FUZZER 00:18:49.681 #define SPDK_CONFIG_FUZZER_LIB 00:18:49.681 #undef SPDK_CONFIG_GOLANG 00:18:49.681 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:18:49.681 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:18:49.681 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:18:49.681 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:18:49.681 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:18:49.681 #undef SPDK_CONFIG_HAVE_LIBBSD 00:18:49.681 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:18:49.681 #define SPDK_CONFIG_IDXD 1 00:18:49.681 #define SPDK_CONFIG_IDXD_KERNEL 1 00:18:49.681 #undef SPDK_CONFIG_IPSEC_MB 00:18:49.681 #define SPDK_CONFIG_IPSEC_MB_DIR 00:18:49.681 #define SPDK_CONFIG_ISAL 1 00:18:49.681 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:18:49.681 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:18:49.681 #define SPDK_CONFIG_LIBDIR 00:18:49.681 #undef SPDK_CONFIG_LTO 00:18:49.681 #define SPDK_CONFIG_MAX_LCORES 128 00:18:49.681 #define SPDK_CONFIG_NVME_CUSE 1 00:18:49.681 #undef SPDK_CONFIG_OCF 00:18:49.681 #define SPDK_CONFIG_OCF_PATH 00:18:49.681 #define SPDK_CONFIG_OPENSSL_PATH 00:18:49.681 #undef SPDK_CONFIG_PGO_CAPTURE 00:18:49.681 #define SPDK_CONFIG_PGO_DIR 00:18:49.681 #undef SPDK_CONFIG_PGO_USE 00:18:49.681 #define SPDK_CONFIG_PREFIX /usr/local 00:18:49.681 #undef SPDK_CONFIG_RAID5F 00:18:49.681 #define SPDK_CONFIG_RBD 1 00:18:49.681 #define SPDK_CONFIG_RDMA 1 00:18:49.681 #define SPDK_CONFIG_RDMA_PROV verbs 00:18:49.681 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:18:49.681 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:18:49.681 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:18:49.681 #define SPDK_CONFIG_SHARED 1 00:18:49.681 #undef 
SPDK_CONFIG_SMA 00:18:49.681 #define SPDK_CONFIG_TESTS 1 00:18:49.681 #undef SPDK_CONFIG_TSAN 00:18:49.681 #define SPDK_CONFIG_UBLK 1 00:18:49.681 #define SPDK_CONFIG_UBSAN 1 00:18:49.681 #undef SPDK_CONFIG_UNIT_TESTS 00:18:49.681 #undef SPDK_CONFIG_URING 00:18:49.681 #define SPDK_CONFIG_URING_PATH 00:18:49.681 #undef SPDK_CONFIG_URING_ZNS 00:18:49.681 #undef SPDK_CONFIG_USDT 00:18:49.681 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:18:49.681 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:18:49.681 #undef SPDK_CONFIG_VFIO_USER 00:18:49.681 #define SPDK_CONFIG_VFIO_USER_DIR 00:18:49.681 #define SPDK_CONFIG_VHOST 1 00:18:49.681 #define SPDK_CONFIG_VIRTIO 1 00:18:49.681 #undef SPDK_CONFIG_VTUNE 00:18:49.681 #define SPDK_CONFIG_VTUNE_DIR 00:18:49.681 #define SPDK_CONFIG_WERROR 1 00:18:49.681 #define SPDK_CONFIG_WPDK_DIR 00:18:49.681 #undef SPDK_CONFIG_XNVME 00:18:49.681 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:18:49.681 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:18:49.681 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@17 -- # NETMASK=127.0.0.0/24 00:18:49.681 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@18 -- # MIGRATION_ADDRESS=127.0.0.2 00:18:49.681 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@56 -- # echo 'Running ip migration tests' 00:18:49.681 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@57 -- # timing_enter start_iscsi_tgt_0 00:18:49.681 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:49.681 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:18:49.681 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@58 -- # rpc_first_addr=/var/tmp/spdk0.sock 00:18:49.681 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@59 -- # 
iscsi_tgt_start /var/tmp/spdk0.sock 1 00:18:49.681 Process pid: 88317 00:18:49.681 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@39 -- # pid=88317 00:18:49.681 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@40 -- # echo 'Process pid: 88317' 00:18:49.681 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -r /var/tmp/spdk0.sock -m 1 --wait-for-rpc 00:18:49.681 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@41 -- # pids+=($pid) 00:18:49.681 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@43 -- # trap 'kill_all_iscsi_target; exit 1' SIGINT SIGTERM EXIT 00:18:49.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk0.sock... 00:18:49.681 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@45 -- # waitforlisten 88317 /var/tmp/spdk0.sock 00:18:49.681 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@829 -- # '[' -z 88317 ']' 00:18:49.681 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk0.sock 00:18:49.681 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:49.681 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk0.sock...' 00:18:49.681 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:49.681 05:09:49 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:18:49.681 [2024-07-23 05:09:49.751424] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:18:49.681 [2024-07-23 05:09:49.751537] [ DPDK EAL parameters: iscsi --no-shconf -c 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88317 ] 00:18:49.681 [2024-07-23 05:09:49.889975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.939 [2024-07-23 05:09:49.979607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.506 05:09:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:50.506 05:09:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@862 -- # return 0 00:18:50.506 05:09:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@46 -- # rpc_cmd -s /var/tmp/spdk0.sock iscsi_set_options -o 30 -a 64 00:18:50.506 05:09:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.506 05:09:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:18:50.506 05:09:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.506 05:09:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@47 -- # rpc_cmd -s /var/tmp/spdk0.sock framework_start_init 00:18:50.506 05:09:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.506 05:09:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:18:50.764 iscsi_tgt is listening. Running tests... 00:18:50.764 05:09:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.764 05:09:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@48 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:18:50.764 05:09:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@50 -- # rpc_cmd -s /var/tmp/spdk0.sock iscsi_create_initiator_group 2 ANY 127.0.0.0/24 00:18:50.764 05:09:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.764 05:09:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:18:50.764 05:09:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.764 05:09:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@51 -- # rpc_cmd -s /var/tmp/spdk0.sock bdev_malloc_create 64 512 00:18:50.764 05:09:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.764 05:09:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:18:50.764 Malloc0 00:18:50.764 05:09:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.764 05:09:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@53 -- # trap 'kill_all_iscsi_target; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:18:50.764 05:09:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@60 -- # timing_exit start_iscsi_tgt_0 00:18:50.764 05:09:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:50.764 05:09:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:18:51.022 05:09:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@62 -- # timing_enter start_iscsi_tgt_1 00:18:51.022 05:09:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:51.022 05:09:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:18:51.022 Process pid: 88350 00:18:51.022 05:09:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@63 -- # rpc_second_addr=/var/tmp/spdk1.sock 00:18:51.022 05:09:50 
iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@64 -- # iscsi_tgt_start /var/tmp/spdk1.sock 2 00:18:51.022 05:09:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@39 -- # pid=88350 00:18:51.022 05:09:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@40 -- # echo 'Process pid: 88350' 00:18:51.022 05:09:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -r /var/tmp/spdk1.sock -m 2 --wait-for-rpc 00:18:51.022 05:09:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@41 -- # pids+=($pid) 00:18:51.022 05:09:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@43 -- # trap 'kill_all_iscsi_target; exit 1' SIGINT SIGTERM EXIT 00:18:51.022 05:09:50 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@45 -- # waitforlisten 88350 /var/tmp/spdk1.sock 00:18:51.022 05:09:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@829 -- # '[' -z 88350 ']' 00:18:51.022 05:09:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk1.sock 00:18:51.022 05:09:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:51.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk1.sock... 00:18:51.022 05:09:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk1.sock...' 00:18:51.022 05:09:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:51.022 05:09:50 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:18:51.022 [2024-07-23 05:09:51.073614] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:18:51.022 [2024-07-23 05:09:51.073962] [ DPDK EAL parameters: iscsi --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88350 ] 00:18:51.022 [2024-07-23 05:09:51.213957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.280 [2024-07-23 05:09:51.317798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:51.847 05:09:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:51.847 05:09:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@862 -- # return 0 00:18:51.847 05:09:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@46 -- # rpc_cmd -s /var/tmp/spdk1.sock iscsi_set_options -o 30 -a 64 00:18:51.847 05:09:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.847 05:09:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:18:51.847 05:09:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.847 05:09:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@47 -- # rpc_cmd -s /var/tmp/spdk1.sock framework_start_init 00:18:51.847 05:09:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.847 05:09:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:18:52.104 iscsi_tgt is listening. Running tests... 00:18:52.104 05:09:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.104 05:09:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@48 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:18:52.105 05:09:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@50 -- # rpc_cmd -s /var/tmp/spdk1.sock iscsi_create_initiator_group 2 ANY 127.0.0.0/24 00:18:52.105 05:09:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.105 05:09:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:18:52.105 05:09:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.105 05:09:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@51 -- # rpc_cmd -s /var/tmp/spdk1.sock bdev_malloc_create 64 512 00:18:52.105 05:09:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.105 05:09:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:18:52.105 Malloc0 00:18:52.105 05:09:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.105 05:09:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@53 -- # trap 'kill_all_iscsi_target; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:18:52.105 05:09:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@65 -- # timing_exit start_iscsi_tgt_1 00:18:52.105 05:09:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:52.105 05:09:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:18:52.422 05:09:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@67 -- # rpc_add_target_node /var/tmp/spdk0.sock 00:18:52.422 05:09:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@28 -- # ip netns exec spdk_iscsi_ns ip addr add 127.0.0.2/24 dev spdk_tgt_int 00:18:52.422 05:09:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@29 -- # rpc_cmd -s /var/tmp/spdk0.sock iscsi_create_portal_group 1 127.0.0.2:3260 00:18:52.422 05:09:52 iscsi_tgt.iscsi_tgt_ip_migration -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.422 05:09:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:18:52.422 05:09:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.422 05:09:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@30 -- # rpc_cmd -s /var/tmp/spdk0.sock iscsi_create_target_node target1 target1_alias Malloc0:0 1:2 64 -d 00:18:52.422 05:09:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.422 05:09:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:18:52.422 05:09:52 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.422 05:09:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@31 -- # ip netns exec spdk_iscsi_ns ip addr del 127.0.0.2/24 dev spdk_tgt_int 00:18:52.422 05:09:52 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@69 -- # sleep 1 00:18:53.354 05:09:53 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@70 -- # iscsiadm -m discovery -t sendtargets -p 127.0.0.2:3260 00:18:53.354 127.0.0.2:3260,1 iqn.2016-06.io.spdk:target1 00:18:53.354 05:09:53 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@71 -- # sleep 1 00:18:54.286 05:09:54 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@72 -- # iscsiadm -m node --login -p 127.0.0.2:3260 00:18:54.286 Logging in to [iface: default, target: iqn.2016-06.io.spdk:target1, portal: 127.0.0.2,3260] 00:18:54.286 Login to [iface: default, target: iqn.2016-06.io.spdk:target1, portal: 127.0.0.2,3260] successful. 
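The `rpc_add_target_node` helper traced above does four things: temporarily bring up the migration address inside the target's network namespace, register a portal group and a target node over that instance's RPC socket, then drop the address again (later in the test the same helper runs against the second instance, which is what "migrates" 127.0.0.2). A reconstruction from the xtrace, with a dry-run shim added as an assumption:

```shell
# rpc_add_target_node as seen in the log; the address, device and namespace
# names are copied verbatim from the xtrace. The run() shim is an
# assumption: DRY_RUN=1 prints the commands instead of executing them.
rpc_add_target_node() {
    local rpc_sock=$1
    run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$*"; else "$@"; fi; }
    # Bring the migration IP up inside the iSCSI namespace.
    run ip netns exec spdk_iscsi_ns ip addr add 127.0.0.2/24 dev spdk_tgt_int
    # Register the portal and the target node on this instance.
    run rpc_cmd -s "$rpc_sock" iscsi_create_portal_group 1 127.0.0.2:3260
    run rpc_cmd -s "$rpc_sock" iscsi_create_target_node target1 target1_alias Malloc0:0 1:2 64 -d
    # Drop the address again; whichever instance serves the target re-adds it.
    run ip netns exec spdk_iscsi_ns ip addr del 127.0.0.2/24 dev spdk_tgt_int
}
```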
00:18:54.286 05:09:54 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@73 -- # waitforiscsidevices 1 00:18:54.287 05:09:54 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@116 -- # local num=1 00:18:54.287 05:09:54 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:18:54.287 05:09:54 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:18:54.287 05:09:54 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:18:54.287 05:09:54 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:18:54.287 [2024-07-23 05:09:54.408912] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:54.287 05:09:54 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@119 -- # n=1 00:18:54.287 05:09:54 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:18:54.287 05:09:54 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@123 -- # return 0 00:18:54.287 05:09:54 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@77 -- # fiopid=88428 00:18:54.287 05:09:54 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@78 -- # sleep 3 00:18:54.287 05:09:54 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 4096 -d 32 -t randrw -r 12 00:18:54.287 [global] 00:18:54.287 thread=1 00:18:54.287 invalidate=1 00:18:54.287 rw=randrw 00:18:54.287 time_based=1 00:18:54.287 runtime=12 00:18:54.287 ioengine=libaio 00:18:54.287 direct=1 00:18:54.287 bs=4096 00:18:54.287 iodepth=32 00:18:54.287 norandommap=1 00:18:54.287 numjobs=1 00:18:54.287 00:18:54.287 [job0] 00:18:54.287 filename=/dev/sda 00:18:54.287 queue_depth set to 113 (sda) 00:18:54.544 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32 00:18:54.544 fio-3.35 
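The `waitforiscsidevices` call above retries `iscsiadm -m session -P 3` up to 20 times, counting "Attached scsi disk" lines until the expected number of devices appears. The generic shape of that loop (the probe command is parameterized here, an assumption, so the loop can be shown without a live session; the real helper in `iscsi_tgt/common.sh` hardcodes iscsiadm):

```shell
# Generalized form of the waitforiscsidevices polling loop: retry a probe
# command until it reports the expected count of attached scsi disks, or
# give up after 20 attempts.
waitfordevices() {
    local num=$1; shift
    local i n
    for ((i = 1; i <= 20; i++)); do
        # Same pattern the harness greps for in `iscsiadm -m session -P 3`.
        n=$("$@" | grep -c 'Attached scsi disk sd[a-z]*')
        [ "$n" -eq "$num" ] && return 0
        sleep 0.1
    done
    return 1
}
```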
00:18:54.544 Starting 1 thread 00:18:54.544 [2024-07-23 05:09:54.574741] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:57.829 05:09:57 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@80 -- # rpc_cmd -s /var/tmp/spdk0.sock spdk_kill_instance SIGTERM 00:18:57.829 05:09:57 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.829 05:09:57 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:18:57.829 05:09:57 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.829 05:09:57 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@81 -- # wait 88317 00:18:57.829 05:09:57 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@83 -- # rpc_add_target_node /var/tmp/spdk1.sock 00:18:57.829 05:09:57 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@28 -- # ip netns exec spdk_iscsi_ns ip addr add 127.0.0.2/24 dev spdk_tgt_int 00:18:57.829 05:09:57 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@29 -- # rpc_cmd -s /var/tmp/spdk1.sock iscsi_create_portal_group 1 127.0.0.2:3260 00:18:57.829 05:09:57 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.829 05:09:57 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:18:57.829 05:09:57 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.829 05:09:57 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@30 -- # rpc_cmd -s /var/tmp/spdk1.sock iscsi_create_target_node target1 target1_alias Malloc0:0 1:2 64 -d 00:18:57.829 05:09:57 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.829 05:09:57 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:18:57.829 05:09:57 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:18:57.829 05:09:57 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@31 -- # ip netns exec spdk_iscsi_ns ip addr del 127.0.0.2/24 dev spdk_tgt_int 00:18:57.829 05:09:57 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@85 -- # wait 88428 00:19:07.798 [2024-07-23 05:10:06.684578] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:07.798 00:19:07.798 job0: (groupid=0, jobs=1): err= 0: pid=88456: Tue Jul 23 05:10:06 2024 00:19:07.798 read: IOPS=14.4k, BW=56.2MiB/s (58.9MB/s)(674MiB/12001msec) 00:19:07.798 slat (nsec): min=2215, max=56347, avg=5121.62, stdev=4406.31 00:19:07.798 clat (usec): min=288, max=2006.8k, avg=1119.15, stdev=19312.63 00:19:07.798 lat (usec): min=300, max=2006.8k, avg=1124.27, stdev=19312.60 00:19:07.798 clat percentiles (usec): 00:19:07.798 | 1.00th=[ 619], 5.00th=[ 717], 10.00th=[ 766], 20.00th=[ 832], 00:19:07.798 | 30.00th=[ 873], 40.00th=[ 898], 50.00th=[ 914], 60.00th=[ 938], 00:19:07.798 | 70.00th=[ 971], 80.00th=[ 1037], 90.00th=[ 1139], 95.00th=[ 1221], 00:19:07.798 | 99.00th=[ 1303], 99.50th=[ 1336], 99.90th=[ 1385], 99.95th=[ 1401], 00:19:07.798 | 99.99th=[ 1483] 00:19:07.798 bw ( KiB/s): min=32904, max=71568, per=100.00%, avg=65666.95, stdev=10746.96, samples=20 00:19:07.798 iops : min= 8226, max=17892, avg=16416.70, stdev=2686.72, samples=20 00:19:07.798 write: IOPS=14.4k, BW=56.1MiB/s (58.8MB/s)(673MiB/12001msec); 0 zone resets 00:19:07.798 slat (nsec): min=2153, max=60980, avg=5057.50, stdev=4359.46 00:19:07.798 clat (usec): min=420, max=2007.1k, avg=1095.44, stdev=19322.73 00:19:07.798 lat (usec): min=438, max=2007.1k, avg=1100.49, stdev=19322.71 00:19:07.798 clat percentiles (usec): 00:19:07.798 | 1.00th=[ 603], 5.00th=[ 693], 10.00th=[ 734], 20.00th=[ 791], 00:19:07.798 | 30.00th=[ 832], 40.00th=[ 857], 50.00th=[ 881], 60.00th=[ 914], 00:19:07.798 | 70.00th=[ 963], 80.00th=[ 1037], 90.00th=[ 1139], 95.00th=[ 1205], 00:19:07.798 | 99.00th=[ 1287], 
99.50th=[ 1303], 99.90th=[ 1352], 99.95th=[ 1369], 00:19:07.798 | 99.99th=[ 1434] 00:19:07.798 bw ( KiB/s): min=32752, max=70224, per=100.00%, avg=65637.75, stdev=10647.50, samples=20 00:19:07.798 iops : min= 8188, max=17556, avg=16409.40, stdev=2661.86, samples=20 00:19:07.798 lat (usec) : 500=0.06%, 750=9.95%, 1000=65.60% 00:19:07.798 lat (msec) : 2=24.38%, >=2000=0.01% 00:19:07.798 cpu : usr=7.38%, sys=13.99%, ctx=25374, majf=0, minf=1 00:19:07.798 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0% 00:19:07.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:07.798 issued rwts: total=172580,172410,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.798 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:07.798 00:19:07.798 Run status group 0 (all jobs): 00:19:07.798 READ: bw=56.2MiB/s (58.9MB/s), 56.2MiB/s-56.2MiB/s (58.9MB/s-58.9MB/s), io=674MiB (707MB), run=12001-12001msec 00:19:07.798 WRITE: bw=56.1MiB/s (58.8MB/s), 56.1MiB/s-56.1MiB/s (58.8MB/s-58.8MB/s), io=673MiB (706MB), run=12001-12001msec 00:19:07.798 00:19:07.798 Disk stats (read/write): 00:19:07.798 sda: ios=170723/170520, merge=0/0, ticks=175767/177442, in_queue=353209, util=99.35% 00:19:07.798 05:10:06 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@87 -- # trap - SIGINT SIGTERM EXIT 00:19:07.798 05:10:06 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@89 -- # iscsicleanup 00:19:07.798 05:10:06 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:19:07.798 Cleaning up iSCSI connection 00:19:07.798 05:10:06 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:19:07.798 Logging out of session [sid: 21, target: iqn.2016-06.io.spdk:target1, portal: 127.0.0.2,3260] 00:19:07.798 Logout of [sid: 21, target: 
iqn.2016-06.io.spdk:target1, portal: 127.0.0.2,3260] successful. 00:19:07.798 05:10:06 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:19:07.798 05:10:06 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@983 -- # rm -rf 00:19:07.798 05:10:06 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@91 -- # rpc_cmd -s /var/tmp/spdk1.sock spdk_kill_instance SIGTERM 00:19:07.798 05:10:06 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.798 05:10:06 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:19:07.798 05:10:06 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.798 05:10:06 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@92 -- # wait 88350 00:19:07.798 05:10:07 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@93 -- # iscsitestfini 00:19:07.798 05:10:07 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:19:07.798 00:19:07.798 real 0m17.635s 00:19:07.798 user 0m22.779s 00:19:07.798 sys 0m4.242s 00:19:07.798 05:10:07 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:07.798 05:10:07 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:19:07.798 ************************************ 00:19:07.798 END TEST iscsi_tgt_ip_migration 00:19:07.798 ************************************ 00:19:07.798 05:10:07 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:19:07.798 05:10:07 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@40 -- # run_test iscsi_tgt_trace_record /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/trace_record/trace_record.sh 00:19:07.798 05:10:07 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:07.798 05:10:07 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:07.798 05:10:07 iscsi_tgt -- 
common/autotest_common.sh@10 -- # set +x 00:19:07.798 ************************************ 00:19:07.798 START TEST iscsi_tgt_trace_record 00:19:07.798 ************************************ 00:19:07.798 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/trace_record/trace_record.sh 00:19:07.798 * Looking for test storage... 00:19:07.798 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/trace_record 00:19:07.798 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:19:07.798 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:19:07.798 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:19:07.798 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:19:07.798 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:19:07.798 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:19:07.798 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:19:07.798 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:19:07.798 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:19:07.798 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:19:07.798 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:19:07.798 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:19:07.799 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@22 -- # 
INITIATOR_IP=10.0.0.2 00:19:07.799 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:19:07.799 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:19:07.799 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:19:07.799 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:19:07.799 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:19:07.799 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:19:07.799 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:19:07.799 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@11 -- # iscsitestinit 00:19:07.799 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:19:07.799 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@13 -- # TRACE_TMP_FOLDER=./tmp-trace 00:19:07.799 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@14 -- # TRACE_RECORD_OUTPUT=./tmp-trace/record.trace 00:19:07.799 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@15 -- # TRACE_RECORD_NOTICE_LOG=./tmp-trace/record.notice 00:19:07.799 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@16 -- # TRACE_TOOL_LOG=./tmp-trace/trace.log 00:19:07.799 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@22 -- # '[' -z 10.0.0.1 ']' 00:19:07.799 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@27 -- # '[' -z 10.0.0.2 ']' 00:19:07.799 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@32 -- # NUM_TRACE_ENTRIES=4096 00:19:07.799 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@33 -- # 
MALLOC_BDEV_SIZE=64 00:19:07.799 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@34 -- # MALLOC_BLOCK_SIZE=4096 00:19:07.799 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@36 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:07.799 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@37 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:19:07.799 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@39 -- # timing_enter start_iscsi_tgt 00:19:07.799 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:07.799 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@10 -- # set +x 00:19:07.799 start iscsi_tgt with trace enabled 00:19:07.799 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@41 -- # echo 'start iscsi_tgt with trace enabled' 00:19:07.799 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@43 -- # iscsi_pid=88648 00:19:07.799 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@42 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xf --num-trace-entries 4096 --tpoint-group all 00:19:07.799 Process pid: 88648 00:19:07.799 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@44 -- # echo 'Process pid: 88648' 00:19:07.799 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@46 -- # trap 'killprocess $iscsi_pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:19:07.799 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@48 -- # waitforlisten 88648 00:19:07.799 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@829 -- # '[' -z 88648 ']' 00:19:07.799 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.799 05:10:07 
iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:07.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.799 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.799 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:07.799 05:10:07 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@10 -- # set +x 00:19:07.799 [2024-07-23 05:10:07.412878] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:19:07.799 [2024-07-23 05:10:07.412966] [ DPDK EAL parameters: iscsi --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88648 ] 00:19:07.799 [2024-07-23 05:10:07.550177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:07.799 [2024-07-23 05:10:07.646967] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask all specified. 00:19:07.799 [2024-07-23 05:10:07.647028] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s iscsi -p 88648' to capture a snapshot of events at runtime. 00:19:07.799 [2024-07-23 05:10:07.647052] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:07.799 [2024-07-23 05:10:07.647060] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:07.799 [2024-07-23 05:10:07.647067] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/iscsi_trace.pid88648 for offline analysis/debug. 
00:19:07.799 [2024-07-23 05:10:07.647208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:07.799 [2024-07-23 05:10:07.647359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:07.799 [2024-07-23 05:10:07.647919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:07.799 [2024-07-23 05:10:07.647968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.365 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:08.365 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@862 -- # return 0 00:19:08.365 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@50 -- # echo 'iscsi_tgt is listening. Running tests...' 00:19:08.365 iscsi_tgt is listening. Running tests... 00:19:08.365 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@52 -- # timing_exit start_iscsi_tgt 00:19:08.365 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:08.365 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@10 -- # set +x 00:19:08.365 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@54 -- # mkdir -p ./tmp-trace 00:19:08.365 Trace record pid: 88683 00:19:08.365 Create bdevs and target nodes 00:19:08.365 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@56 -- # record_pid=88683 00:19:08.365 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@57 -- # echo 'Trace record pid: 88683' 00:19:08.365 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@59 -- # RPCS= 00:19:08.365 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@60 -- # RPCS+='iscsi_create_portal_group 1 10.0.0.1:3260\n' 00:19:08.365 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace_record -s 
iscsi -p 88648 -f ./tmp-trace/record.trace -q 00:19:08.365 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@61 -- # RPCS+='iscsi_create_initiator_group 2 ANY 10.0.0.2/32\n' 00:19:08.365 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@63 -- # echo 'Create bdevs and target nodes' 00:19:08.365 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@64 -- # CONNECTION_NUMBER=15 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # seq 0 15 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc0\n' 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target0 Target0_alias Malloc0:0 1:2 256 -d\n' 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc1\n' 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target1 Target1_alias Malloc1:0 1:2 256 -d\n' 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc2\n' 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target2 Target2_alias Malloc2:0 1:2 256 -d\n' 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in 
$(seq 0 $CONNECTION_NUMBER) 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc3\n' 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target3 Target3_alias Malloc3:0 1:2 256 -d\n' 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc4\n' 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target4 Target4_alias Malloc4:0 1:2 256 -d\n' 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc5\n' 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target5 Target5_alias Malloc5:0 1:2 256 -d\n' 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc6\n' 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target6 Target6_alias Malloc6:0 1:2 256 -d\n' 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc7\n' 00:19:08.366 05:10:08 
iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target7 Target7_alias Malloc7:0 1:2 256 -d\n' 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc8\n' 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target8 Target8_alias Malloc8:0 1:2 256 -d\n' 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc9\n' 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target9 Target9_alias Malloc9:0 1:2 256 -d\n' 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc10\n' 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target10 Target10_alias Malloc10:0 1:2 256 -d\n' 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc11\n' 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target11 Target11_alias Malloc11:0 1:2 256 -d\n' 00:19:08.366 05:10:08 
iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc12\n' 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target12 Target12_alias Malloc12:0 1:2 256 -d\n' 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc13\n' 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target13 Target13_alias Malloc13:0 1:2 256 -d\n' 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc14\n' 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target14 Target14_alias Malloc14:0 1:2 256 -d\n' 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc15\n' 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target15 Target15_alias Malloc15:0 1:2 256 -d\n' 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@69 -- # echo -e iscsi_create_portal_group 1 '10.0.0.1:3260\niscsi_create_initiator_group' 2 ANY 
'10.0.0.2/32\nbdev_malloc_create' 64 4096 -b 'Malloc0\niscsi_create_target_node' Target0 Target0_alias Malloc0:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc1\niscsi_create_target_node' Target1 Target1_alias Malloc1:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc2\niscsi_create_target_node' Target2 Target2_alias Malloc2:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc3\niscsi_create_target_node' Target3 Target3_alias Malloc3:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc4\niscsi_create_target_node' Target4 Target4_alias Malloc4:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc5\niscsi_create_target_node' Target5 Target5_alias Malloc5:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc6\niscsi_create_target_node' Target6 Target6_alias Malloc6:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc7\niscsi_create_target_node' Target7 Target7_alias Malloc7:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc8\niscsi_create_target_node' Target8 Target8_alias Malloc8:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc9\niscsi_create_target_node' Target9 Target9_alias Malloc9:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc10\niscsi_create_target_node' Target10 Target10_alias Malloc10:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc11\niscsi_create_target_node' Target11 Target11_alias Malloc11:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc12\niscsi_create_target_node' Target12 Target12_alias Malloc12:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc13\niscsi_create_target_node' Target13 Target13_alias Malloc13:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc14\niscsi_create_target_node' Target14 Target14_alias Malloc14:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc15\niscsi_create_target_node' Target15 Target15_alias Malloc15:0 1:2 256 '-d\n' 00:19:08.366 05:10:08 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:09.329 Malloc0 
00:19:09.329 Malloc1 00:19:09.329 Malloc2 00:19:09.329 Malloc3 00:19:09.329 Malloc4 00:19:09.329 Malloc5 00:19:09.329 Malloc6 00:19:09.329 Malloc7 00:19:09.329 Malloc8 00:19:09.329 Malloc9 00:19:09.329 Malloc10 00:19:09.329 Malloc11 00:19:09.329 Malloc12 00:19:09.329 Malloc13 00:19:09.329 Malloc14 00:19:09.329 Malloc15 00:19:09.329 05:10:09 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@71 -- # sleep 1 00:19:10.310 05:10:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@73 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:19:10.310 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target0 00:19:10.310 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:19:10.310 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target2 00:19:10.310 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:19:10.310 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target4 00:19:10.310 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target5 00:19:10.310 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target6 00:19:10.310 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target7 00:19:10.310 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target8 00:19:10.310 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target9 00:19:10.310 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target10 00:19:10.310 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target11 00:19:10.310 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target12 00:19:10.310 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target13 00:19:10.310 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target14 00:19:10.310 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target15 00:19:10.310 05:10:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@74 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:19:10.310 [2024-07-23 05:10:10.316723] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:10.310 [2024-07-23 05:10:10.334354] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:10.310 [2024-07-23 05:10:10.376001] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 
00:19:10.310 [2024-07-23 05:10:10.379236] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:10.310 [2024-07-23 05:10:10.393839] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:10.310 [2024-07-23 05:10:10.442613] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:10.310 [2024-07-23 05:10:10.480439] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:10.310 [2024-07-23 05:10:10.490820] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:10.310 [2024-07-23 05:10:10.515185] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:10.568 [2024-07-23 05:10:10.548151] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:10.568 [2024-07-23 05:10:10.595392] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:10.568 [2024-07-23 05:10:10.603974] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:10.568 [2024-07-23 05:10:10.629844] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:10.568 [2024-07-23 05:10:10.652828] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:10.568 [2024-07-23 05:10:10.703391] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:10.568 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target0, portal: 10.0.0.1,3260] 00:19:10.568 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:19:10.568 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:19:10.568 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:19:10.568 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:19:10.568 
Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:19:10.568 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:19:10.568 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:19:10.568 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:19:10.568 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:19:10.568 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:19:10.568 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] 00:19:10.568 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] 00:19:10.568 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] 00:19:10.568 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] 00:19:10.568 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] 00:19:10.568 Login to [iface: default, target: iqn.2016-06.io.spdk:Target0, portal: 10.0.0.1,3260] successful. 00:19:10.568 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:19:10.568 Login to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:19:10.568 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:19:10.568 Login to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:19:10.568 Login to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 00:19:10.568 Login to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 
00:19:10.568 Login to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:19:10.568 Login to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 00:19:10.568 Login to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:19:10.568 Login to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:19:10.568 Login to [iface: default, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] successful. 00:19:10.568 Login to [iface: default, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] successful. 00:19:10.568 Login to [iface: default, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] successful. 00:19:10.568 Login to [iface: default, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] successful. 00:19:10.568 Login to [iface: default, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] successful. 
00:19:10.568 05:10:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@75 -- # waitforiscsidevices 16 00:19:10.569 05:10:10 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@116 -- # local num=16 00:19:10.569 05:10:10 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:19:10.569 05:10:10 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:19:10.569 [2024-07-23 05:10:10.708225] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:10.569 05:10:10 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:19:10.569 05:10:10 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:19:10.569 Running FIO 00:19:10.569 05:10:10 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@119 -- # n=16 00:19:10.569 05:10:10 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@120 -- # '[' 16 -ne 16 ']' 00:19:10.569 05:10:10 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@123 -- # return 0 00:19:10.569 05:10:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@77 -- # trap 'iscsicleanup; killprocess $iscsi_pid; killprocess $record_pid; delete_tmp_files; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:19:10.569 05:10:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@79 -- # echo 'Running FIO' 00:19:10.569 05:10:10 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 32 -t randrw -r 1 00:19:10.826 [global] 00:19:10.826 thread=1 00:19:10.826 invalidate=1 00:19:10.826 rw=randrw 00:19:10.826 time_based=1 00:19:10.826 runtime=1 00:19:10.826 ioengine=libaio 00:19:10.826 direct=1 00:19:10.826 bs=131072 00:19:10.826 iodepth=32 00:19:10.826 norandommap=1 00:19:10.826 numjobs=1 00:19:10.826 00:19:10.826 [job0] 00:19:10.826 filename=/dev/sda 00:19:10.826 [job1] 
00:19:10.826 filename=/dev/sdb 00:19:10.826 [job2] 00:19:10.826 filename=/dev/sdd 00:19:10.826 [job3] 00:19:10.826 filename=/dev/sdc 00:19:10.826 [job4] 00:19:10.826 filename=/dev/sde 00:19:10.826 [job5] 00:19:10.826 filename=/dev/sdf 00:19:10.826 [job6] 00:19:10.826 filename=/dev/sdg 00:19:10.826 [job7] 00:19:10.826 filename=/dev/sdh 00:19:10.826 [job8] 00:19:10.826 filename=/dev/sdi 00:19:10.826 [job9] 00:19:10.826 filename=/dev/sdj 00:19:10.826 [job10] 00:19:10.826 filename=/dev/sdk 00:19:10.826 [job11] 00:19:10.826 filename=/dev/sdl 00:19:10.826 [job12] 00:19:10.826 filename=/dev/sdm 00:19:10.826 [job13] 00:19:10.826 filename=/dev/sdn 00:19:10.826 [job14] 00:19:10.826 filename=/dev/sdp 00:19:10.826 [job15] 00:19:10.826 filename=/dev/sdo 00:19:10.826 queue_depth set to 113 (sda) 00:19:10.826 queue_depth set to 113 (sdb) 00:19:11.083 queue_depth set to 113 (sdd) 00:19:11.083 queue_depth set to 113 (sdc) 00:19:11.083 queue_depth set to 113 (sde) 00:19:11.083 queue_depth set to 113 (sdf) 00:19:11.083 queue_depth set to 113 (sdg) 00:19:11.083 queue_depth set to 113 (sdh) 00:19:11.083 queue_depth set to 113 (sdi) 00:19:11.083 queue_depth set to 113 (sdj) 00:19:11.083 queue_depth set to 113 (sdk) 00:19:11.083 queue_depth set to 113 (sdl) 00:19:11.083 queue_depth set to 113 (sdm) 00:19:11.341 queue_depth set to 113 (sdn) 00:19:11.341 queue_depth set to 113 (sdp) 00:19:11.341 queue_depth set to 113 (sdo) 00:19:11.341 job0: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:11.341 job1: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:11.341 job2: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:11.341 job3: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:11.341 job4: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 
128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:11.341 job5: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:11.341 job6: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:11.341 job7: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:11.341 job8: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:11.341 job9: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:11.341 job10: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:11.341 job11: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:11.341 job12: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:11.341 job13: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:11.341 job14: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:11.341 job15: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:11.341 fio-3.35 00:19:11.341 Starting 16 threads 00:19:11.341 [2024-07-23 05:10:11.488859] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:11.341 [2024-07-23 05:10:11.492275] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:11.341 [2024-07-23 05:10:11.495987] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:11.341 [2024-07-23 05:10:11.498503] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:11.341 
[2024-07-23 05:10:11.500717] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:11.341 [2024-07-23 05:10:11.502733] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:11.341 [2024-07-23 05:10:11.505059] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:11.341 [2024-07-23 05:10:11.508261] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:11.341 [2024-07-23 05:10:11.510268] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:11.341 [2024-07-23 05:10:11.512365] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:11.341 [2024-07-23 05:10:11.514538] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:11.341 [2024-07-23 05:10:11.517307] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:11.341 [2024-07-23 05:10:11.519523] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:11.341 [2024-07-23 05:10:11.521619] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:11.341 [2024-07-23 05:10:11.523676] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:11.341 [2024-07-23 05:10:11.527730] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:12.724 [2024-07-23 05:10:12.871777] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:12.724 [2024-07-23 05:10:12.881685] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:12.724 [2024-07-23 05:10:12.885270] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:12.724 [2024-07-23 05:10:12.887572] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:12.724 [2024-07-23 05:10:12.889749] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:12.724 [2024-07-23 05:10:12.893144] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:12.724 [2024-07-23 05:10:12.898782] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:12.724 [2024-07-23 05:10:12.901689] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:12.724 [2024-07-23 05:10:12.904944] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:12.724 [2024-07-23 05:10:12.908434] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:12.724 [2024-07-23 05:10:12.910959] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:12.724 [2024-07-23 05:10:12.913655] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:12.724 [2024-07-23 05:10:12.916217] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:12.724 00:19:12.724 job0: (groupid=0, jobs=1): err= 0: pid=89051: Tue Jul 23 05:10:12 2024 00:19:12.724 read: IOPS=512, BW=64.1MiB/s (67.2MB/s)(66.6MiB/1040msec) 00:19:12.724 slat (usec): min=6, max=2599, avg=32.48, stdev=140.89 00:19:12.724 clat (usec): min=1619, max=44821, avg=7667.86, stdev=2874.52 00:19:12.724 lat (usec): min=1637, max=44837, avg=7700.34, stdev=2869.17 00:19:12.724 clat percentiles (usec): 00:19:12.724 | 1.00th=[ 4146], 5.00th=[ 6259], 10.00th=[ 6652], 20.00th=[ 6980], 00:19:12.724 | 30.00th=[ 7177], 40.00th=[ 7308], 50.00th=[ 7504], 60.00th=[ 7635], 00:19:12.724 | 70.00th=[ 7767], 80.00th=[ 7898], 90.00th=[ 8160], 95.00th=[ 8848], 00:19:12.724 | 99.00th=[14746], 99.50th=[39584], 99.90th=[44827], 99.95th=[44827], 00:19:12.724 | 99.99th=[44827] 00:19:12.724 bw ( KiB/s): min=65024, max=70514, per=6.19%, avg=67769.00, stdev=3882.02, samples=2 00:19:12.724 iops : min= 508, max= 550, avg=529.00, stdev=29.70, samples=2 
00:19:12.724 write: IOPS=551, BW=69.0MiB/s (72.3MB/s)(71.8MiB/1040msec); 0 zone resets 00:19:12.724 slat (usec): min=8, max=3716, avg=39.93, stdev=168.71 00:19:12.724 clat (usec): min=483, max=89865, avg=50475.54, stdev=10126.63 00:19:12.724 lat (usec): min=1669, max=89878, avg=50515.47, stdev=10094.97 00:19:12.724 clat percentiles (usec): 00:19:12.724 | 1.00th=[ 4228], 5.00th=[40109], 10.00th=[46400], 20.00th=[48497], 00:19:12.724 | 30.00th=[50070], 40.00th=[51119], 50.00th=[51643], 60.00th=[52691], 00:19:12.724 | 70.00th=[53216], 80.00th=[54264], 90.00th=[55313], 95.00th=[57410], 00:19:12.724 | 99.00th=[80217], 99.50th=[86508], 99.90th=[89654], 99.95th=[89654], 00:19:12.724 | 99.99th=[89654] 00:19:12.724 bw ( KiB/s): min=69493, max=70144, per=6.23%, avg=69818.50, stdev=460.33, samples=2 00:19:12.724 iops : min= 542, max= 548, avg=545.00, stdev= 4.24, samples=2 00:19:12.724 lat (usec) : 500=0.09% 00:19:12.724 lat (msec) : 2=0.27%, 4=0.27%, 10=47.97%, 20=1.17%, 50=13.82% 00:19:12.724 lat (msec) : 100=36.40% 00:19:12.724 cpu : usr=0.19%, sys=2.12%, ctx=1073, majf=0, minf=1 00:19:12.724 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=97.2%, >=64=0.0% 00:19:12.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.724 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:12.724 issued rwts: total=533,574,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.724 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:12.724 job1: (groupid=0, jobs=1): err= 0: pid=89052: Tue Jul 23 05:10:12 2024 00:19:12.724 read: IOPS=511, BW=64.0MiB/s (67.1MB/s)(65.8MiB/1028msec) 00:19:12.724 slat (usec): min=6, max=810, avg=22.23, stdev=53.90 00:19:12.724 clat (usec): min=4113, max=33198, avg=7654.79, stdev=2239.84 00:19:12.724 lat (usec): min=4604, max=33214, avg=7677.02, stdev=2235.30 00:19:12.724 clat percentiles (usec): 00:19:12.724 | 1.00th=[ 5342], 5.00th=[ 6456], 10.00th=[ 6521], 20.00th=[ 6783], 00:19:12.724 | 
30.00th=[ 6980], 40.00th=[ 7111], 50.00th=[ 7242], 60.00th=[ 7373], 00:19:12.724 | 70.00th=[ 7635], 80.00th=[ 7832], 90.00th=[ 8586], 95.00th=[ 9896], 00:19:12.724 | 99.00th=[13829], 99.50th=[29754], 99.90th=[33162], 99.95th=[33162], 00:19:12.724 | 99.99th=[33162] 00:19:12.724 bw ( KiB/s): min=66560, max=67328, per=6.11%, avg=66944.00, stdev=543.06, samples=2 00:19:12.724 iops : min= 520, max= 526, avg=523.00, stdev= 4.24, samples=2 00:19:12.724 write: IOPS=561, BW=70.2MiB/s (73.6MB/s)(72.1MiB/1028msec); 0 zone resets 00:19:12.724 slat (usec): min=7, max=926, avg=23.78, stdev=54.26 00:19:12.724 clat (usec): min=16208, max=71814, avg=49907.13, stdev=5633.80 00:19:12.724 lat (usec): min=16221, max=71845, avg=49930.91, stdev=5631.42 00:19:12.724 clat percentiles (usec): 00:19:12.724 | 1.00th=[27919], 5.00th=[42730], 10.00th=[44827], 20.00th=[46400], 00:19:12.724 | 30.00th=[47973], 40.00th=[49546], 50.00th=[50594], 60.00th=[51643], 00:19:12.724 | 70.00th=[52691], 80.00th=[53216], 90.00th=[55313], 95.00th=[56886], 00:19:12.724 | 99.00th=[63701], 99.50th=[67634], 99.90th=[71828], 99.95th=[71828], 00:19:12.724 | 99.99th=[71828] 00:19:12.724 bw ( KiB/s): min=67072, max=73216, per=6.25%, avg=70144.00, stdev=4344.46, samples=2 00:19:12.724 iops : min= 524, max= 572, avg=548.00, stdev=33.94, samples=2 00:19:12.724 lat (msec) : 10=45.33%, 20=2.27%, 50=23.03%, 100=29.37% 00:19:12.724 cpu : usr=0.49%, sys=1.75%, ctx=1049, majf=0, minf=1 00:19:12.724 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=97.2%, >=64=0.0% 00:19:12.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.724 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:12.724 issued rwts: total=526,577,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.724 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:12.724 job2: (groupid=0, jobs=1): err= 0: pid=89060: Tue Jul 23 05:10:12 2024 00:19:12.724 read: IOPS=554, BW=69.3MiB/s 
(72.6MB/s)(71.8MiB/1036msec) 00:19:12.724 slat (usec): min=6, max=865, avg=26.16, stdev=62.62 00:19:12.724 clat (usec): min=1360, max=41796, avg=7915.59, stdev=3471.11 00:19:12.724 lat (usec): min=2008, max=41822, avg=7941.75, stdev=3467.76 00:19:12.724 clat percentiles (usec): 00:19:12.724 | 1.00th=[ 5407], 5.00th=[ 6390], 10.00th=[ 6587], 20.00th=[ 6915], 00:19:12.724 | 30.00th=[ 7111], 40.00th=[ 7308], 50.00th=[ 7439], 60.00th=[ 7570], 00:19:12.724 | 70.00th=[ 7701], 80.00th=[ 8029], 90.00th=[ 8586], 95.00th=[ 9896], 00:19:12.724 | 99.00th=[35914], 99.50th=[40109], 99.90th=[41681], 99.95th=[41681], 00:19:12.724 | 99.99th=[41681] 00:19:12.724 bw ( KiB/s): min=71424, max=73984, per=6.64%, avg=72704.00, stdev=1810.19, samples=2 00:19:12.724 iops : min= 558, max= 578, avg=568.00, stdev=14.14, samples=2 00:19:12.724 write: IOPS=543, BW=67.9MiB/s (71.2MB/s)(70.4MiB/1036msec); 0 zone resets 00:19:12.724 slat (usec): min=9, max=6021, avg=57.78, stdev=338.70 00:19:12.724 clat (usec): min=6313, max=81758, avg=50044.56, stdev=7299.01 00:19:12.724 lat (usec): min=8090, max=81784, avg=50102.34, stdev=7188.47 00:19:12.724 clat percentiles (usec): 00:19:12.724 | 1.00th=[16188], 5.00th=[42206], 10.00th=[44303], 20.00th=[46924], 00:19:12.724 | 30.00th=[48497], 40.00th=[49546], 50.00th=[50594], 60.00th=[51643], 00:19:12.724 | 70.00th=[52691], 80.00th=[53740], 90.00th=[55837], 95.00th=[57934], 00:19:12.724 | 99.00th=[74974], 99.50th=[78119], 99.90th=[81265], 99.95th=[81265], 00:19:12.724 | 99.99th=[81265] 00:19:12.724 bw ( KiB/s): min=66560, max=71168, per=6.14%, avg=68864.00, stdev=3258.35, samples=2 00:19:12.724 iops : min= 520, max= 556, avg=538.00, stdev=25.46, samples=2 00:19:12.724 lat (msec) : 2=0.09%, 10=48.20%, 20=2.29%, 50=21.99%, 100=27.44% 00:19:12.724 cpu : usr=0.68%, sys=1.84%, ctx=1089, majf=0, minf=1 00:19:12.724 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=97.3%, >=64=0.0% 00:19:12.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:19:12.724 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:12.724 issued rwts: total=574,563,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.724 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:12.724 job3: (groupid=0, jobs=1): err= 0: pid=89094: Tue Jul 23 05:10:12 2024 00:19:12.724 read: IOPS=520, BW=65.0MiB/s (68.2MB/s)(67.5MiB/1038msec) 00:19:12.724 slat (usec): min=7, max=566, avg=22.82, stdev=44.80 00:19:12.724 clat (usec): min=710, max=44546, avg=7710.17, stdev=3557.27 00:19:12.724 lat (usec): min=720, max=44557, avg=7732.99, stdev=3557.56 00:19:12.724 clat percentiles (usec): 00:19:12.725 | 1.00th=[ 1401], 5.00th=[ 4015], 10.00th=[ 6652], 20.00th=[ 7046], 00:19:12.725 | 30.00th=[ 7242], 40.00th=[ 7439], 50.00th=[ 7570], 60.00th=[ 7767], 00:19:12.725 | 70.00th=[ 7898], 80.00th=[ 8094], 90.00th=[ 8291], 95.00th=[ 8717], 00:19:12.725 | 99.00th=[14091], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:19:12.725 | 99.99th=[44303] 00:19:12.725 bw ( KiB/s): min=68096, max=68864, per=6.25%, avg=68480.00, stdev=543.06, samples=2 00:19:12.725 iops : min= 532, max= 538, avg=535.00, stdev= 4.24, samples=2 00:19:12.725 write: IOPS=544, BW=68.0MiB/s (71.3MB/s)(70.6MiB/1038msec); 0 zone resets 00:19:12.725 slat (usec): min=8, max=1281, avg=29.15, stdev=78.29 00:19:12.725 clat (usec): min=1709, max=89045, avg=51274.08, stdev=9369.62 00:19:12.725 lat (usec): min=1741, max=89058, avg=51303.23, stdev=9372.38 00:19:12.725 clat percentiles (usec): 00:19:12.725 | 1.00th=[ 5669], 5.00th=[40633], 10.00th=[47449], 20.00th=[49546], 00:19:12.725 | 30.00th=[50070], 40.00th=[51119], 50.00th=[52167], 60.00th=[53216], 00:19:12.725 | 70.00th=[54264], 80.00th=[55313], 90.00th=[56361], 95.00th=[58983], 00:19:12.725 | 99.00th=[79168], 99.50th=[84411], 99.90th=[88605], 99.95th=[88605], 00:19:12.725 | 99.99th=[88605] 00:19:12.725 bw ( KiB/s): min=68608, max=69120, per=6.14%, avg=68864.00, stdev=362.04, samples=2 00:19:12.725 iops : 
min= 536, max= 540, avg=538.00, stdev= 2.83, samples=2 00:19:12.725 lat (usec) : 750=0.09%, 1000=0.09% 00:19:12.725 lat (msec) : 2=0.45%, 4=1.99%, 10=46.52%, 20=0.45%, 50=12.58% 00:19:12.725 lat (msec) : 100=37.83% 00:19:12.725 cpu : usr=0.48%, sys=1.83%, ctx=1069, majf=0, minf=1 00:19:12.725 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=97.2%, >=64=0.0% 00:19:12.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.725 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:12.725 issued rwts: total=540,565,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.725 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:12.725 job4: (groupid=0, jobs=1): err= 0: pid=89121: Tue Jul 23 05:10:12 2024 00:19:12.725 read: IOPS=542, BW=67.9MiB/s (71.2MB/s)(70.2MiB/1035msec) 00:19:12.725 slat (usec): min=6, max=839, avg=24.94, stdev=61.88 00:19:12.725 clat (usec): min=2421, max=39658, avg=7746.22, stdev=2750.38 00:19:12.725 lat (usec): min=2433, max=39681, avg=7771.16, stdev=2748.30 00:19:12.725 clat percentiles (usec): 00:19:12.725 | 1.00th=[ 4359], 5.00th=[ 6390], 10.00th=[ 6718], 20.00th=[ 6980], 00:19:12.725 | 30.00th=[ 7177], 40.00th=[ 7308], 50.00th=[ 7439], 60.00th=[ 7570], 00:19:12.725 | 70.00th=[ 7701], 80.00th=[ 7898], 90.00th=[ 8455], 95.00th=[ 9372], 00:19:12.725 | 99.00th=[15533], 99.50th=[34341], 99.90th=[39584], 99.95th=[39584], 00:19:12.725 | 99.99th=[39584] 00:19:12.725 bw ( KiB/s): min=71168, max=71680, per=6.52%, avg=71424.00, stdev=362.04, samples=2 00:19:12.725 iops : min= 556, max= 560, avg=558.00, stdev= 2.83, samples=2 00:19:12.725 write: IOPS=544, BW=68.1MiB/s (71.4MB/s)(70.5MiB/1035msec); 0 zone resets 00:19:12.725 slat (usec): min=8, max=1372, avg=26.28, stdev=65.37 00:19:12.725 clat (usec): min=6207, max=78469, avg=50821.32, stdev=6640.98 00:19:12.725 lat (usec): min=6239, max=78492, avg=50847.61, stdev=6641.42 00:19:12.725 clat percentiles (usec): 00:19:12.725 | 1.00th=[20579], 
5.00th=[44303], 10.00th=[46400], 20.00th=[48497], 00:19:12.725 | 30.00th=[49546], 40.00th=[50594], 50.00th=[51119], 60.00th=[52167], 00:19:12.725 | 70.00th=[53216], 80.00th=[54264], 90.00th=[55837], 95.00th=[56886], 00:19:12.725 | 99.00th=[71828], 99.50th=[74974], 99.90th=[78119], 99.95th=[78119], 00:19:12.725 | 99.99th=[78119] 00:19:12.725 bw ( KiB/s): min=67072, max=70144, per=6.12%, avg=68608.00, stdev=2172.23, samples=2 00:19:12.725 iops : min= 524, max= 548, avg=536.00, stdev=16.97, samples=2 00:19:12.725 lat (msec) : 4=0.44%, 10=47.96%, 20=1.60%, 50=18.29%, 100=31.71% 00:19:12.725 cpu : usr=0.68%, sys=1.93%, ctx=1036, majf=0, minf=1 00:19:12.725 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=97.2%, >=64=0.0% 00:19:12.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.725 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:12.725 issued rwts: total=562,564,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.725 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:12.725 job5: (groupid=0, jobs=1): err= 0: pid=89126: Tue Jul 23 05:10:12 2024 00:19:12.725 read: IOPS=594, BW=74.3MiB/s (77.9MB/s)(77.4MiB/1041msec) 00:19:12.725 slat (usec): min=6, max=469, avg=21.88, stdev=36.67 00:19:12.725 clat (usec): min=1879, max=46070, avg=7611.88, stdev=3329.97 00:19:12.725 lat (usec): min=1898, max=46081, avg=7633.76, stdev=3327.88 00:19:12.725 clat percentiles (usec): 00:19:12.725 | 1.00th=[ 2638], 5.00th=[ 3687], 10.00th=[ 6390], 20.00th=[ 6652], 00:19:12.725 | 30.00th=[ 6915], 40.00th=[ 7111], 50.00th=[ 7242], 60.00th=[ 7373], 00:19:12.725 | 70.00th=[ 7570], 80.00th=[ 7832], 90.00th=[ 8848], 95.00th=[10028], 00:19:12.725 | 99.00th=[19006], 99.50th=[20055], 99.90th=[45876], 99.95th=[45876], 00:19:12.725 | 99.99th=[45876] 00:19:12.725 bw ( KiB/s): min=68352, max=89344, per=7.20%, avg=78848.00, stdev=14843.59, samples=2 00:19:12.725 iops : min= 534, max= 698, avg=616.00, stdev=115.97, samples=2 
00:19:12.725 write: IOPS=593, BW=74.2MiB/s (77.8MB/s)(77.2MiB/1041msec); 0 zone resets 00:19:12.725 slat (usec): min=8, max=664, avg=29.84, stdev=55.24 00:19:12.725 clat (usec): min=1233, max=93221, avg=46092.31, stdev=12999.02 00:19:12.725 lat (usec): min=1266, max=93241, avg=46122.14, stdev=12998.96 00:19:12.725 clat percentiles (usec): 00:19:12.725 | 1.00th=[ 4424], 5.00th=[10159], 10.00th=[31589], 20.00th=[45876], 00:19:12.725 | 30.00th=[47449], 40.00th=[48497], 50.00th=[49021], 60.00th=[50070], 00:19:12.725 | 70.00th=[50594], 80.00th=[51643], 90.00th=[53216], 95.00th=[54789], 00:19:12.725 | 99.00th=[73925], 99.50th=[78119], 99.90th=[92799], 99.95th=[92799], 00:19:12.725 | 99.99th=[92799] 00:19:12.725 bw ( KiB/s): min=72960, max=78080, per=6.73%, avg=75520.00, stdev=3620.39, samples=2 00:19:12.725 iops : min= 570, max= 610, avg=590.00, stdev=28.28, samples=2 00:19:12.725 lat (msec) : 2=0.32%, 4=2.59%, 10=47.13%, 20=4.20%, 50=27.57% 00:19:12.725 lat (msec) : 100=18.19% 00:19:12.725 cpu : usr=0.67%, sys=2.02%, ctx=1172, majf=0, minf=1 00:19:12.725 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=97.5%, >=64=0.0% 00:19:12.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.725 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:12.725 issued rwts: total=619,618,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.725 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:12.725 job6: (groupid=0, jobs=1): err= 0: pid=89157: Tue Jul 23 05:10:12 2024 00:19:12.725 read: IOPS=543, BW=68.0MiB/s (71.3MB/s)(69.9MiB/1028msec) 00:19:12.725 slat (usec): min=6, max=768, avg=25.63, stdev=60.01 00:19:12.725 clat (usec): min=5065, max=31949, avg=7653.23, stdev=1888.69 00:19:12.725 lat (usec): min=5297, max=31959, avg=7678.86, stdev=1886.09 00:19:12.725 clat percentiles (usec): 00:19:12.725 | 1.00th=[ 6128], 5.00th=[ 6521], 10.00th=[ 6652], 20.00th=[ 6980], 00:19:12.725 | 30.00th=[ 7111], 40.00th=[ 7242], 
50.00th=[ 7373], 60.00th=[ 7570], 00:19:12.725 | 70.00th=[ 7701], 80.00th=[ 7963], 90.00th=[ 8455], 95.00th=[ 9241], 00:19:12.725 | 99.00th=[11600], 99.50th=[28181], 99.90th=[31851], 99.95th=[31851], 00:19:12.725 | 99.99th=[31851] 00:19:12.725 bw ( KiB/s): min=68864, max=73472, per=6.50%, avg=71168.00, stdev=3258.35, samples=2 00:19:12.725 iops : min= 538, max= 574, avg=556.00, stdev=25.46, samples=2 00:19:12.725 write: IOPS=556, BW=69.6MiB/s (72.9MB/s)(71.5MiB/1028msec); 0 zone resets 00:19:12.725 slat (usec): min=7, max=789, avg=28.19, stdev=55.04 00:19:12.725 clat (usec): min=11466, max=65729, avg=49863.49, stdev=6083.60 00:19:12.725 lat (usec): min=11496, max=65914, avg=49891.68, stdev=6087.77 00:19:12.725 clat percentiles (usec): 00:19:12.725 | 1.00th=[22938], 5.00th=[40633], 10.00th=[44303], 20.00th=[47449], 00:19:12.725 | 30.00th=[48497], 40.00th=[50070], 50.00th=[50594], 60.00th=[51643], 00:19:12.725 | 70.00th=[52167], 80.00th=[53740], 90.00th=[55313], 95.00th=[57410], 00:19:12.725 | 99.00th=[61604], 99.50th=[62653], 99.90th=[65799], 99.95th=[65799], 00:19:12.725 | 99.99th=[65799] 00:19:12.725 bw ( KiB/s): min=67584, max=71680, per=6.21%, avg=69632.00, stdev=2896.31, samples=2 00:19:12.725 iops : min= 528, max= 560, avg=544.00, stdev=22.63, samples=2 00:19:12.725 lat (msec) : 10=47.75%, 20=1.77%, 50=20.95%, 100=29.53% 00:19:12.725 cpu : usr=0.97%, sys=1.46%, ctx=1070, majf=0, minf=1 00:19:12.725 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=97.3%, >=64=0.0% 00:19:12.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.725 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:12.725 issued rwts: total=559,572,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.725 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:12.725 job7: (groupid=0, jobs=1): err= 0: pid=89161: Tue Jul 23 05:10:12 2024 00:19:12.725 read: IOPS=481, BW=60.2MiB/s (63.1MB/s)(62.2MiB/1034msec) 00:19:12.725 slat (usec): 
min=7, max=753, avg=19.63, stdev=37.69 00:19:12.725 clat (usec): min=1134, max=39319, avg=7865.25, stdev=2675.52 00:19:12.725 lat (usec): min=1143, max=39341, avg=7884.88, stdev=2674.82 00:19:12.725 clat percentiles (usec): 00:19:12.725 | 1.00th=[ 5735], 5.00th=[ 6652], 10.00th=[ 6915], 20.00th=[ 7177], 00:19:12.725 | 30.00th=[ 7242], 40.00th=[ 7439], 50.00th=[ 7635], 60.00th=[ 7767], 00:19:12.725 | 70.00th=[ 7898], 80.00th=[ 8094], 90.00th=[ 8356], 95.00th=[ 9503], 00:19:12.725 | 99.00th=[12256], 99.50th=[33817], 99.90th=[39060], 99.95th=[39060], 00:19:12.725 | 99.99th=[39060] 00:19:12.725 bw ( KiB/s): min=58624, max=67975, per=5.78%, avg=63299.50, stdev=6612.16, samples=2 00:19:12.725 iops : min= 458, max= 531, avg=494.50, stdev=51.62, samples=2 00:19:12.725 write: IOPS=538, BW=67.3MiB/s (70.6MB/s)(69.6MiB/1034msec); 0 zone resets 00:19:12.725 slat (usec): min=8, max=978, avg=22.99, stdev=44.39 00:19:12.725 clat (usec): min=11887, max=87890, avg=52208.29, stdev=7197.24 00:19:12.725 lat (usec): min=11907, max=87913, avg=52231.27, stdev=7196.98 00:19:12.725 clat percentiles (usec): 00:19:12.725 | 1.00th=[21890], 5.00th=[44303], 10.00th=[46924], 20.00th=[49021], 00:19:12.725 | 30.00th=[50594], 40.00th=[51643], 50.00th=[52691], 60.00th=[53216], 00:19:12.726 | 70.00th=[54264], 80.00th=[55313], 90.00th=[57410], 95.00th=[58983], 00:19:12.726 | 99.00th=[82314], 99.50th=[84411], 99.90th=[87557], 99.95th=[87557], 00:19:12.726 | 99.99th=[87557] 00:19:12.726 bw ( KiB/s): min=66436, max=69120, per=6.04%, avg=67778.00, stdev=1897.87, samples=2 00:19:12.726 iops : min= 519, max= 540, avg=529.50, stdev=14.85, samples=2 00:19:12.726 lat (msec) : 2=0.19%, 4=0.19%, 10=44.93%, 20=1.99%, 50=13.27% 00:19:12.726 lat (msec) : 100=39.43% 00:19:12.726 cpu : usr=0.77%, sys=1.55%, ctx=954, majf=0, minf=1 00:19:12.726 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=97.1%, >=64=0.0% 00:19:12.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.726 
complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:12.726 issued rwts: total=498,557,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.726 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:12.726 job8: (groupid=0, jobs=1): err= 0: pid=89162: Tue Jul 23 05:10:12 2024 00:19:12.726 read: IOPS=500, BW=62.5MiB/s (65.5MB/s)(64.6MiB/1034msec) 00:19:12.726 slat (usec): min=6, max=255, avg=18.10, stdev=23.64 00:19:12.726 clat (usec): min=2296, max=37898, avg=7820.53, stdev=2818.15 00:19:12.726 lat (usec): min=2306, max=37920, avg=7838.63, stdev=2819.04 00:19:12.726 clat percentiles (usec): 00:19:12.726 | 1.00th=[ 4424], 5.00th=[ 6521], 10.00th=[ 6718], 20.00th=[ 6980], 00:19:12.726 | 30.00th=[ 7177], 40.00th=[ 7308], 50.00th=[ 7439], 60.00th=[ 7635], 00:19:12.726 | 70.00th=[ 7767], 80.00th=[ 7898], 90.00th=[ 8356], 95.00th=[10028], 00:19:12.726 | 99.00th=[15008], 99.50th=[34866], 99.90th=[38011], 99.95th=[38011], 00:19:12.726 | 99.99th=[38011] 00:19:12.726 bw ( KiB/s): min=65536, max=65792, per=6.00%, avg=65664.00, stdev=181.02, samples=2 00:19:12.726 iops : min= 512, max= 514, avg=513.00, stdev= 1.41, samples=2 00:19:12.726 write: IOPS=546, BW=68.3MiB/s (71.6MB/s)(70.6MiB/1034msec); 0 zone resets 00:19:12.726 slat (usec): min=8, max=695, avg=25.82, stdev=42.97 00:19:12.726 clat (usec): min=7394, max=78201, avg=51244.94, stdev=6867.87 00:19:12.726 lat (usec): min=7413, max=78217, avg=51270.77, stdev=6870.72 00:19:12.726 clat percentiles (usec): 00:19:12.726 | 1.00th=[15270], 5.00th=[43254], 10.00th=[46400], 20.00th=[49021], 00:19:12.726 | 30.00th=[50070], 40.00th=[51119], 50.00th=[51643], 60.00th=[52691], 00:19:12.726 | 70.00th=[53740], 80.00th=[54789], 90.00th=[56361], 95.00th=[57934], 00:19:12.726 | 99.00th=[69731], 99.50th=[76022], 99.90th=[78119], 99.95th=[78119], 00:19:12.726 | 99.99th=[78119] 00:19:12.726 bw ( KiB/s): min=67072, max=70400, per=6.13%, avg=68736.00, stdev=2353.25, samples=2 00:19:12.726 iops : min= 524, max= 550, 
avg=537.00, stdev=18.38, samples=2 00:19:12.726 lat (msec) : 4=0.46%, 10=45.10%, 20=2.50%, 50=14.42%, 100=37.52% 00:19:12.726 cpu : usr=0.68%, sys=1.65%, ctx=1027, majf=0, minf=1 00:19:12.726 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=97.1%, >=64=0.0% 00:19:12.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.726 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:12.726 issued rwts: total=517,565,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.726 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:12.726 job9: (groupid=0, jobs=1): err= 0: pid=89168: Tue Jul 23 05:10:12 2024 00:19:12.726 read: IOPS=575, BW=71.9MiB/s (75.4MB/s)(74.1MiB/1031msec) 00:19:12.726 slat (usec): min=6, max=510, avg=23.08, stdev=39.62 00:19:12.726 clat (usec): min=4904, max=34052, avg=7836.07, stdev=2549.90 00:19:12.726 lat (usec): min=4995, max=34067, avg=7859.15, stdev=2546.53 00:19:12.726 clat percentiles (usec): 00:19:12.726 | 1.00th=[ 5997], 5.00th=[ 6325], 10.00th=[ 6587], 20.00th=[ 6783], 00:19:12.726 | 30.00th=[ 6980], 40.00th=[ 7111], 50.00th=[ 7308], 60.00th=[ 7504], 00:19:12.726 | 70.00th=[ 7701], 80.00th=[ 7963], 90.00th=[ 8979], 95.00th=[11469], 00:19:12.726 | 99.00th=[16581], 99.50th=[32375], 99.90th=[33817], 99.95th=[33817], 00:19:12.726 | 99.99th=[33817] 00:19:12.726 bw ( KiB/s): min=72704, max=78236, per=6.89%, avg=75470.00, stdev=3911.71, samples=2 00:19:12.726 iops : min= 568, max= 611, avg=589.50, stdev=30.41, samples=2 00:19:12.726 write: IOPS=557, BW=69.7MiB/s (73.1MB/s)(71.9MiB/1031msec); 0 zone resets 00:19:12.726 slat (usec): min=8, max=437, avg=27.29, stdev=39.16 00:19:12.726 clat (usec): min=17069, max=71802, avg=49119.69, stdev=5013.48 00:19:12.726 lat (usec): min=17098, max=71826, avg=49146.98, stdev=5014.96 00:19:12.726 clat percentiles (usec): 00:19:12.726 | 1.00th=[30802], 5.00th=[42206], 10.00th=[44827], 20.00th=[46400], 00:19:12.726 | 30.00th=[47973], 40.00th=[48497], 
50.00th=[49546], 60.00th=[50070], 00:19:12.726 | 70.00th=[51119], 80.00th=[52167], 90.00th=[53216], 95.00th=[55313], 00:19:12.726 | 99.00th=[64750], 99.50th=[66323], 99.90th=[71828], 99.95th=[71828], 00:19:12.726 | 99.99th=[71828] 00:19:12.726 bw ( KiB/s): min=66949, max=73216, per=6.25%, avg=70082.50, stdev=4431.44, samples=2 00:19:12.726 iops : min= 523, max= 572, avg=547.50, stdev=34.65, samples=2 00:19:12.726 lat (msec) : 10=47.35%, 20=3.25%, 50=27.91%, 100=21.49% 00:19:12.726 cpu : usr=0.58%, sys=1.94%, ctx=1078, majf=0, minf=1 00:19:12.726 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=97.3%, >=64=0.0% 00:19:12.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.726 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:12.726 issued rwts: total=593,575,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.726 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:12.726 job10: (groupid=0, jobs=1): err= 0: pid=89170: Tue Jul 23 05:10:12 2024 00:19:12.726 read: IOPS=567, BW=70.9MiB/s (74.4MB/s)(73.1MiB/1031msec) 00:19:12.726 slat (usec): min=6, max=539, avg=21.88, stdev=39.83 00:19:12.726 clat (usec): min=2956, max=31266, avg=7600.48, stdev=1574.70 00:19:12.726 lat (usec): min=2967, max=31276, avg=7622.36, stdev=1575.42 00:19:12.726 clat percentiles (usec): 00:19:12.726 | 1.00th=[ 3884], 5.00th=[ 6456], 10.00th=[ 6652], 20.00th=[ 6849], 00:19:12.726 | 30.00th=[ 7111], 40.00th=[ 7242], 50.00th=[ 7373], 60.00th=[ 7570], 00:19:12.726 | 70.00th=[ 7701], 80.00th=[ 7963], 90.00th=[ 8586], 95.00th=[ 9503], 00:19:12.726 | 99.00th=[13304], 99.50th=[13304], 99.90th=[31327], 99.95th=[31327], 00:19:12.726 | 99.99th=[31327] 00:19:12.726 bw ( KiB/s): min=70656, max=78848, per=6.83%, avg=74752.00, stdev=5792.62, samples=2 00:19:12.726 iops : min= 552, max= 616, avg=584.00, stdev=45.25, samples=2 00:19:12.726 write: IOPS=557, BW=69.7MiB/s (73.1MB/s)(71.9MiB/1031msec); 0 zone resets 00:19:12.726 slat (usec): 
min=8, max=1278, avg=29.33, stdev=65.27 00:19:12.726 clat (usec): min=13872, max=74781, avg=49488.03, stdev=6504.48 00:19:12.726 lat (usec): min=13890, max=74822, avg=49517.36, stdev=6500.12 00:19:12.726 clat percentiles (usec): 00:19:12.726 | 1.00th=[18220], 5.00th=[39060], 10.00th=[44303], 20.00th=[46924], 00:19:12.726 | 30.00th=[48497], 40.00th=[49021], 50.00th=[50070], 60.00th=[50594], 00:19:12.726 | 70.00th=[52167], 80.00th=[53216], 90.00th=[55313], 95.00th=[56361], 00:19:12.726 | 99.00th=[66323], 99.50th=[72877], 99.90th=[74974], 99.95th=[74974], 00:19:12.726 | 99.99th=[74974] 00:19:12.726 bw ( KiB/s): min=68096, max=71424, per=6.22%, avg=69760.00, stdev=2353.25, samples=2 00:19:12.726 iops : min= 532, max= 558, avg=545.00, stdev=18.38, samples=2 00:19:12.726 lat (msec) : 4=0.52%, 10=47.76%, 20=2.67%, 50=25.09%, 100=23.97% 00:19:12.726 cpu : usr=0.87%, sys=2.04%, ctx=1004, majf=0, minf=1 00:19:12.726 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=97.3%, >=64=0.0% 00:19:12.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.726 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:12.726 issued rwts: total=585,575,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.726 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:12.726 job11: (groupid=0, jobs=1): err= 0: pid=89171: Tue Jul 23 05:10:12 2024 00:19:12.726 read: IOPS=545, BW=68.2MiB/s (71.5MB/s)(70.6MiB/1036msec) 00:19:12.726 slat (usec): min=7, max=419, avg=23.39, stdev=38.62 00:19:12.726 clat (usec): min=1640, max=41512, avg=7975.43, stdev=2778.18 00:19:12.726 lat (usec): min=1650, max=41539, avg=7998.83, stdev=2778.01 00:19:12.726 clat percentiles (usec): 00:19:12.726 | 1.00th=[ 3654], 5.00th=[ 6652], 10.00th=[ 6980], 20.00th=[ 7177], 00:19:12.726 | 30.00th=[ 7439], 40.00th=[ 7635], 50.00th=[ 7767], 60.00th=[ 7898], 00:19:12.726 | 70.00th=[ 8029], 80.00th=[ 8225], 90.00th=[ 8586], 95.00th=[ 9241], 00:19:12.726 | 99.00th=[13960], 
99.50th=[35914], 99.90th=[41681], 99.95th=[41681], 00:19:12.726 | 99.99th=[41681] 00:19:12.726 bw ( KiB/s): min=71168, max=72593, per=6.56%, avg=71880.50, stdev=1007.63, samples=2 00:19:12.726 iops : min= 556, max= 567, avg=561.50, stdev= 7.78, samples=2 00:19:12.726 write: IOPS=537, BW=67.2MiB/s (70.5MB/s)(69.6MiB/1036msec); 0 zone resets 00:19:12.726 slat (usec): min=8, max=900, avg=30.30, stdev=67.39 00:19:12.726 clat (usec): min=10898, max=82273, avg=51261.17, stdev=6741.82 00:19:12.726 lat (usec): min=10926, max=82291, avg=51291.46, stdev=6744.43 00:19:12.726 clat percentiles (usec): 00:19:12.726 | 1.00th=[20579], 5.00th=[42730], 10.00th=[45876], 20.00th=[48497], 00:19:12.727 | 30.00th=[50070], 40.00th=[51119], 50.00th=[51643], 60.00th=[52691], 00:19:12.727 | 70.00th=[53740], 80.00th=[54789], 90.00th=[56361], 95.00th=[57934], 00:19:12.727 | 99.00th=[73925], 99.50th=[82314], 99.90th=[82314], 99.95th=[82314], 00:19:12.727 | 99.99th=[82314] 00:19:12.727 bw ( KiB/s): min=66436, max=69120, per=6.04%, avg=67778.00, stdev=1897.87, samples=2 00:19:12.727 iops : min= 519, max= 540, avg=529.50, stdev=14.85, samples=2 00:19:12.727 lat (msec) : 2=0.45%, 4=0.27%, 10=47.95%, 20=1.78%, 50=15.24% 00:19:12.727 lat (msec) : 100=34.31% 00:19:12.727 cpu : usr=0.77%, sys=1.74%, ctx=1065, majf=0, minf=1 00:19:12.727 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=97.2%, >=64=0.0% 00:19:12.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.727 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:12.727 issued rwts: total=565,557,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.727 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:12.727 job12: (groupid=0, jobs=1): err= 0: pid=89172: Tue Jul 23 05:10:12 2024 00:19:12.727 read: IOPS=569, BW=71.2MiB/s (74.7MB/s)(73.1MiB/1027msec) 00:19:12.727 slat (usec): min=6, max=566, avg=20.18, stdev=34.54 00:19:12.727 clat (usec): min=6098, max=27091, avg=7721.91, 
stdev=1303.33 00:19:12.727 lat (usec): min=6126, max=27101, avg=7742.09, stdev=1304.38 00:19:12.727 clat percentiles (usec): 00:19:12.727 | 1.00th=[ 6390], 5.00th=[ 6652], 10.00th=[ 6849], 20.00th=[ 7111], 00:19:12.727 | 30.00th=[ 7242], 40.00th=[ 7373], 50.00th=[ 7504], 60.00th=[ 7635], 00:19:12.727 | 70.00th=[ 7767], 80.00th=[ 7963], 90.00th=[ 8356], 95.00th=[ 9503], 00:19:12.727 | 99.00th=[12387], 99.50th=[14484], 99.90th=[27132], 99.95th=[27132], 00:19:12.727 | 99.99th=[27132] 00:19:12.727 bw ( KiB/s): min=65536, max=83968, per=6.83%, avg=74752.00, stdev=13033.39, samples=2 00:19:12.727 iops : min= 512, max= 656, avg=584.00, stdev=101.82, samples=2 00:19:12.727 write: IOPS=551, BW=68.9MiB/s (72.2MB/s)(70.8MiB/1027msec); 0 zone resets 00:19:12.727 slat (usec): min=7, max=647, avg=24.68, stdev=41.56 00:19:12.727 clat (usec): min=11295, max=72934, avg=49943.93, stdev=5460.67 00:19:12.727 lat (usec): min=11319, max=72948, avg=49968.61, stdev=5460.61 00:19:12.727 clat percentiles (usec): 00:19:12.727 | 1.00th=[26084], 5.00th=[44303], 10.00th=[46924], 20.00th=[47973], 00:19:12.727 | 30.00th=[49021], 40.00th=[49546], 50.00th=[50594], 60.00th=[51119], 00:19:12.727 | 70.00th=[51643], 80.00th=[52691], 90.00th=[54264], 95.00th=[55313], 00:19:12.727 | 99.00th=[60556], 99.50th=[66323], 99.90th=[72877], 99.95th=[72877], 00:19:12.727 | 99.99th=[72877] 00:19:12.727 bw ( KiB/s): min=66560, max=70400, per=6.11%, avg=68480.00, stdev=2715.29, samples=2 00:19:12.727 iops : min= 520, max= 550, avg=535.00, stdev=21.21, samples=2 00:19:12.727 lat (msec) : 10=48.57%, 20=2.52%, 50=20.94%, 100=27.98% 00:19:12.727 cpu : usr=0.68%, sys=1.95%, ctx=1049, majf=0, minf=1 00:19:12.727 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=97.3%, >=64=0.0% 00:19:12.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.727 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:12.727 issued rwts: total=585,566,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:19:12.727 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:12.727 job13: (groupid=0, jobs=1): err= 0: pid=89173: Tue Jul 23 05:10:12 2024 00:19:12.727 read: IOPS=555, BW=69.5MiB/s (72.8MB/s)(71.6MiB/1031msec) 00:19:12.727 slat (usec): min=7, max=466, avg=21.70, stdev=39.46 00:19:12.727 clat (usec): min=5661, max=35461, avg=7793.24, stdev=2665.95 00:19:12.727 lat (usec): min=5709, max=35472, avg=7814.95, stdev=2663.47 00:19:12.727 clat percentiles (usec): 00:19:12.727 | 1.00th=[ 5997], 5.00th=[ 6456], 10.00th=[ 6587], 20.00th=[ 6849], 00:19:12.727 | 30.00th=[ 6980], 40.00th=[ 7111], 50.00th=[ 7242], 60.00th=[ 7439], 00:19:12.727 | 70.00th=[ 7635], 80.00th=[ 7898], 90.00th=[ 8455], 95.00th=[10814], 00:19:12.727 | 99.00th=[17171], 99.50th=[33817], 99.90th=[35390], 99.95th=[35390], 00:19:12.727 | 99.99th=[35390] 00:19:12.727 bw ( KiB/s): min=68864, max=76800, per=6.65%, avg=72832.00, stdev=5611.60, samples=2 00:19:12.727 iops : min= 538, max= 600, avg=569.00, stdev=43.84, samples=2 00:19:12.727 write: IOPS=557, BW=69.7MiB/s (73.1MB/s)(71.9MiB/1031msec); 0 zone resets 00:19:12.727 slat (usec): min=7, max=453, avg=25.30, stdev=40.74 00:19:12.727 clat (usec): min=17534, max=80001, avg=49453.95, stdev=5519.81 00:19:12.727 lat (usec): min=17543, max=80026, avg=49479.24, stdev=5521.15 00:19:12.727 clat percentiles (usec): 00:19:12.727 | 1.00th=[30278], 5.00th=[42730], 10.00th=[44827], 20.00th=[46924], 00:19:12.727 | 30.00th=[47973], 40.00th=[48497], 50.00th=[49546], 60.00th=[50070], 00:19:12.727 | 70.00th=[51119], 80.00th=[52167], 90.00th=[53740], 95.00th=[57410], 00:19:12.727 | 99.00th=[68682], 99.50th=[78119], 99.90th=[80217], 99.95th=[80217], 00:19:12.727 | 99.99th=[80217] 00:19:12.727 bw ( KiB/s): min=66816, max=73216, per=6.24%, avg=70016.00, stdev=4525.48, samples=2 00:19:12.727 iops : min= 522, max= 572, avg=547.00, stdev=35.36, samples=2 00:19:12.727 lat (msec) : 10=46.60%, 20=3.14%, 50=28.92%, 100=21.34% 00:19:12.727 cpu : 
usr=0.29%, sys=1.94%, ctx=1088, majf=0, minf=1 00:19:12.727 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=97.3%, >=64=0.0% 00:19:12.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.727 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:12.727 issued rwts: total=573,575,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.727 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:12.727 job14: (groupid=0, jobs=1): err= 0: pid=89174: Tue Jul 23 05:10:12 2024 00:19:12.727 read: IOPS=533, BW=66.7MiB/s (70.0MB/s)(69.0MiB/1034msec) 00:19:12.727 slat (usec): min=6, max=2020, avg=22.31, stdev=87.14 00:19:12.727 clat (usec): min=4128, max=40401, avg=7659.60, stdev=2483.23 00:19:12.727 lat (usec): min=5419, max=40420, avg=7681.91, stdev=2479.55 00:19:12.727 clat percentiles (usec): 00:19:12.727 | 1.00th=[ 6063], 5.00th=[ 6456], 10.00th=[ 6587], 20.00th=[ 6915], 00:19:12.727 | 30.00th=[ 7046], 40.00th=[ 7242], 50.00th=[ 7373], 60.00th=[ 7504], 00:19:12.727 | 70.00th=[ 7635], 80.00th=[ 7832], 90.00th=[ 8225], 95.00th=[ 9241], 00:19:12.727 | 99.00th=[15401], 99.50th=[34866], 99.90th=[40633], 99.95th=[40633], 00:19:12.727 | 99.99th=[40633] 00:19:12.727 bw ( KiB/s): min=69120, max=71424, per=6.42%, avg=70272.00, stdev=1629.17, samples=2 00:19:12.727 iops : min= 540, max= 558, avg=549.00, stdev=12.73, samples=2 00:19:12.727 write: IOPS=550, BW=68.8MiB/s (72.1MB/s)(71.1MiB/1034msec); 0 zone resets 00:19:12.727 slat (usec): min=8, max=390, avg=26.05, stdev=37.13 00:19:12.727 clat (usec): min=9373, max=80531, avg=50583.79, stdev=6596.04 00:19:12.727 lat (usec): min=9383, max=80541, avg=50609.84, stdev=6596.46 00:19:12.727 clat percentiles (usec): 00:19:12.727 | 1.00th=[22676], 5.00th=[44303], 10.00th=[45876], 20.00th=[47449], 00:19:12.727 | 30.00th=[48497], 40.00th=[49546], 50.00th=[50070], 60.00th=[51119], 00:19:12.727 | 70.00th=[52167], 80.00th=[54264], 90.00th=[56361], 95.00th=[58983], 00:19:12.727 | 
99.00th=[73925], 99.50th=[79168], 99.90th=[80217], 99.95th=[80217], 00:19:12.727 | 99.99th=[80217] 00:19:12.727 bw ( KiB/s): min=67328, max=70912, per=6.16%, avg=69120.00, stdev=2534.27, samples=2 00:19:12.727 iops : min= 526, max= 554, avg=540.00, stdev=19.80, samples=2 00:19:12.727 lat (msec) : 10=47.55%, 20=1.87%, 50=23.37%, 100=27.21% 00:19:12.727 cpu : usr=0.97%, sys=1.55%, ctx=1043, majf=0, minf=1 00:19:12.727 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=97.2%, >=64=0.0% 00:19:12.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.727 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:12.727 issued rwts: total=552,569,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.727 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:12.727 job15: (groupid=0, jobs=1): err= 0: pid=89175: Tue Jul 23 05:10:12 2024 00:19:12.727 read: IOPS=506, BW=63.3MiB/s (66.4MB/s)(65.5MiB/1034msec) 00:19:12.727 slat (usec): min=6, max=527, avg=23.27, stdev=43.01 00:19:12.727 clat (usec): min=5419, max=40485, avg=8226.77, stdev=3467.23 00:19:12.727 lat (usec): min=5436, max=40496, avg=8250.04, stdev=3465.87 00:19:12.727 clat percentiles (usec): 00:19:12.727 | 1.00th=[ 6259], 5.00th=[ 6718], 10.00th=[ 6915], 20.00th=[ 7177], 00:19:12.727 | 30.00th=[ 7308], 40.00th=[ 7570], 50.00th=[ 7701], 60.00th=[ 7898], 00:19:12.727 | 70.00th=[ 8029], 80.00th=[ 8225], 90.00th=[ 8717], 95.00th=[10552], 00:19:12.727 | 99.00th=[33162], 99.50th=[38536], 99.90th=[40633], 99.95th=[40633], 00:19:12.727 | 99.99th=[40633] 00:19:12.727 bw ( KiB/s): min=61696, max=70797, per=6.05%, avg=66246.50, stdev=6435.38, samples=2 00:19:12.727 iops : min= 482, max= 553, avg=517.50, stdev=50.20, samples=2 00:19:12.727 write: IOPS=530, BW=66.4MiB/s (69.6MB/s)(68.6MiB/1034msec); 0 zone resets 00:19:12.727 slat (usec): min=8, max=775, avg=30.40, stdev=60.20 00:19:12.727 clat (usec): min=11959, max=86987, avg=52193.12, stdev=6793.19 00:19:12.727 lat 
(usec): min=12592, max=87032, avg=52223.52, stdev=6787.09 00:19:12.727 clat percentiles (usec): 00:19:12.727 | 1.00th=[24249], 5.00th=[43254], 10.00th=[46400], 20.00th=[49021], 00:19:12.727 | 30.00th=[50594], 40.00th=[51643], 50.00th=[52691], 60.00th=[53740], 00:19:12.727 | 70.00th=[54789], 80.00th=[55837], 90.00th=[57410], 95.00th=[59507], 00:19:12.727 | 99.00th=[74974], 99.50th=[79168], 99.90th=[86508], 99.95th=[86508], 00:19:12.727 | 99.99th=[86508] 00:19:12.727 bw ( KiB/s): min=65923, max=68096, per=5.97%, avg=67009.50, stdev=1536.54, samples=2 00:19:12.727 iops : min= 515, max= 532, avg=523.50, stdev=12.02, samples=2 00:19:12.727 lat (msec) : 10=45.76%, 20=2.89%, 50=13.23%, 100=38.12% 00:19:12.727 cpu : usr=1.06%, sys=1.36%, ctx=998, majf=0, minf=1 00:19:12.727 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=97.1%, >=64=0.0% 00:19:12.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.727 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:12.727 issued rwts: total=524,549,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.727 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:12.727 00:19:12.727 Run status group 0 (all jobs): 00:19:12.727 READ: bw=1069MiB/s (1121MB/s), 60.2MiB/s-74.3MiB/s (63.1MB/s-77.9MB/s), io=1113MiB (1167MB), run=1027-1041msec 00:19:12.728 WRITE: bw=1095MiB/s (1148MB/s), 66.4MiB/s-74.2MiB/s (69.6MB/s-77.8MB/s), io=1140MiB (1196MB), run=1027-1041msec 00:19:12.728 00:19:12.728 Disk stats (read/write): 00:19:12.728 sda: ios=528/491, merge=0/0, ticks=3492/24321, in_queue=27814, util=76.12% 00:19:12.728 sdb: ios=488/477, merge=0/0, ticks=3309/23907, in_queue=27217, util=73.75% 00:19:12.728 sdd: ios=564/482, merge=0/0, ticks=3854/23784, in_queue=27639, util=75.73% 00:19:12.728 sdc: ios=485/484, merge=0/0, ticks=3416/24409, in_queue=27825, util=77.19% 00:19:12.728 sde: ios=492/477, merge=0/0, ticks=3671/24092, in_queue=27764, util=77.41% 00:19:12.728 sdf: ios=565/534, 
merge=0/0, ticks=4151/23846, in_queue=27997, util=78.26% 00:19:12.728 sdg: ios=493/474, merge=0/0, ticks=3665/23664, in_queue=27330, util=77.83% 00:19:12.728 sdh: ios=445/470, merge=0/0, ticks=3358/24244, in_queue=27602, util=80.38% 00:19:12.728 sdi: ios=444/473, merge=0/0, ticks=3335/24106, in_queue=27442, util=81.04% 00:19:12.728 sdj: ios=523/478, merge=0/0, ticks=3986/23302, in_queue=27288, util=82.17% 00:19:12.728 sdk: ios=530/480, merge=0/0, ticks=3894/23491, in_queue=27385, util=83.27% 00:19:12.728 sdl: ios=479/470, merge=0/0, ticks=3627/23837, in_queue=27464, util=84.45% 00:19:12.728 sdm: ios=530/469, merge=0/0, ticks=4048/23271, in_queue=27320, util=84.39% 00:19:12.728 sdn: ios=520/478, merge=0/0, ticks=3933/23466, in_queue=27399, util=85.35% 00:19:12.728 sdp: ios=498/481, merge=0/0, ticks=3675/24127, in_queue=27803, util=88.78% 00:19:12.728 sdo: ios=461/468, merge=0/0, ticks=3506/23881, in_queue=27387, util=88.16% 00:19:12.728 [2024-07-23 05:10:12.920590] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:12.728 [2024-07-23 05:10:12.922195] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:12.728 [2024-07-23 05:10:12.923938] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:12.728 Cleaning up iSCSI connection 00:19:12.728 05:10:12 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@82 -- # iscsicleanup 00:19:12.728 05:10:12 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:19:12.728 05:10:12 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:19:13.309 Logging out of session [sid: 22, target: iqn.2016-06.io.spdk:Target0, portal: 10.0.0.1,3260] 00:19:13.309 Logging out of session [sid: 23, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:19:13.309 Logging out of session [sid: 24, target: iqn.2016-06.io.spdk:Target2, 
portal: 10.0.0.1,3260] 00:19:13.309 Logging out of session [sid: 25, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:19:13.309 Logging out of session [sid: 26, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:19:13.309 Logging out of session [sid: 27, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:19:13.309 Logging out of session [sid: 28, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:19:13.309 Logging out of session [sid: 29, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:19:13.309 Logging out of session [sid: 30, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:19:13.309 Logging out of session [sid: 31, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:19:13.309 Logging out of session [sid: 32, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:19:13.309 Logging out of session [sid: 33, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] 00:19:13.309 Logging out of session [sid: 34, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] 00:19:13.309 Logging out of session [sid: 35, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] 00:19:13.309 Logging out of session [sid: 36, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] 00:19:13.309 Logging out of session [sid: 37, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] 00:19:13.309 Logout of [sid: 22, target: iqn.2016-06.io.spdk:Target0, portal: 10.0.0.1,3260] successful. 00:19:13.309 Logout of [sid: 23, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:19:13.309 Logout of [sid: 24, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:19:13.309 Logout of [sid: 25, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:19:13.309 Logout of [sid: 26, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 
00:19:13.309 Logout of [sid: 27, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 00:19:13.309 Logout of [sid: 28, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 00:19:13.309 Logout of [sid: 29, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:19:13.309 Logout of [sid: 30, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 00:19:13.309 Logout of [sid: 31, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:19:13.309 Logout of [sid: 32, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:19:13.309 Logout of [sid: 33, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] successful. 00:19:13.309 Logout of [sid: 34, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] successful. 00:19:13.309 Logout of [sid: 35, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] successful. 00:19:13.309 Logout of [sid: 36, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] successful. 00:19:13.309 Logout of [sid: 37, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] successful. 
00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@983 -- # rm -rf 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@84 -- # RPCS= 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # seq 0 15 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target0\n' 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc0\n' 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target1\n' 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc1\n' 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target2\n' 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc2\n' 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target3\n' 00:19:13.309 
05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc3\n' 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target4\n' 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc4\n' 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target5\n' 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc5\n' 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target6\n' 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc6\n' 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target7\n' 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc7\n' 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- 
trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target8\n' 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc8\n' 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target9\n' 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc9\n' 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target10\n' 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc10\n' 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target11\n' 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc11\n' 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target12\n' 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc12\n' 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- 
trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target13\n' 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc13\n' 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target14\n' 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc14\n' 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target15\n' 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc15\n' 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:13.309 05:10:13 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@90 -- # echo -e iscsi_delete_target_node 'iqn.2016-06.io.spdk:Target0\nbdev_malloc_delete' 'Malloc0\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target1\nbdev_malloc_delete' 'Malloc1\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target2\nbdev_malloc_delete' 'Malloc2\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target3\nbdev_malloc_delete' 'Malloc3\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target4\nbdev_malloc_delete' 'Malloc4\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target5\nbdev_malloc_delete' 'Malloc5\niscsi_delete_target_node' 
'iqn.2016-06.io.spdk:Target6\nbdev_malloc_delete' 'Malloc6\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target7\nbdev_malloc_delete' 'Malloc7\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target8\nbdev_malloc_delete' 'Malloc8\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target9\nbdev_malloc_delete' 'Malloc9\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target10\nbdev_malloc_delete' 'Malloc10\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target11\nbdev_malloc_delete' 'Malloc11\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target12\nbdev_malloc_delete' 'Malloc12\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target13\nbdev_malloc_delete' 'Malloc13\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target14\nbdev_malloc_delete' 'Malloc14\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target15\nbdev_malloc_delete' 'Malloc15\n' 00:19:14.258 05:10:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@92 -- # trap 'delete_tmp_files; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:19:14.258 05:10:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@94 -- # killprocess 88648 00:19:14.258 05:10:14 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@948 -- # '[' -z 88648 ']' 00:19:14.258 05:10:14 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@952 -- # kill -0 88648 00:19:14.258 05:10:14 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@953 -- # uname 00:19:14.258 05:10:14 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:14.258 05:10:14 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88648 00:19:14.258 killing process with pid 88648 00:19:14.258 05:10:14 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:14.258 05:10:14 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:14.258 05:10:14 
iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88648' 00:19:14.258 05:10:14 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@967 -- # kill 88648 00:19:14.258 05:10:14 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@972 -- # wait 88648 00:19:14.515 05:10:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@95 -- # killprocess 88683 00:19:14.516 05:10:14 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@948 -- # '[' -z 88683 ']' 00:19:14.516 05:10:14 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@952 -- # kill -0 88683 00:19:14.516 05:10:14 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@953 -- # uname 00:19:14.516 05:10:14 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:14.516 05:10:14 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88683 00:19:14.516 killing process with pid 88683 00:19:14.516 05:10:14 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@954 -- # process_name=spdk_trace_reco 00:19:14.516 05:10:14 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@958 -- # '[' spdk_trace_reco = sudo ']' 00:19:14.516 05:10:14 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88683' 00:19:14.516 05:10:14 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@967 -- # kill 88683 00:19:14.516 05:10:14 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@972 -- # wait 88683 00:19:14.516 05:10:14 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@96 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -f ./tmp-trace/record.trace 00:19:29.942 05:10:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@100 -- # grep 'trace entries for lcore' ./tmp-trace/record.notice 00:19:29.942 05:10:28 iscsi_tgt.iscsi_tgt_trace_record -- 
trace_record/trace_record.sh@100 -- # cut -d ' ' -f 2 00:19:29.942 05:10:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@100 -- # record_num='153078 00:19:29.942 155563 00:19:29.942 160566 00:19:29.942 157599' 00:19:29.942 05:10:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@103 -- # grep 'Trace Size of lcore' ./tmp-trace/trace.log 00:19:29.942 05:10:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@103 -- # cut -d ' ' -f 6 00:19:29.942 05:10:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@103 -- # trace_tool_num='153078 00:19:29.942 155563 00:19:29.942 160566 00:19:29.942 157599' 00:19:29.942 05:10:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@105 -- # delete_tmp_files 00:19:29.942 05:10:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@19 -- # rm -rf ./tmp-trace 00:19:29.942 entries numbers from trace record are: 153078 155563 160566 157599 00:19:29.942 entries numbers from trace tool are: 153078 155563 160566 157599 00:19:29.942 05:10:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@107 -- # echo 'entries numbers from trace record are:' 153078 155563 160566 157599 00:19:29.942 05:10:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@108 -- # echo 'entries numbers from trace tool are:' 153078 155563 160566 157599 00:19:29.942 05:10:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@110 -- # arr_record_num=($record_num) 00:19:29.942 05:10:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@111 -- # arr_trace_tool_num=($trace_tool_num) 00:19:29.942 05:10:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@112 -- # len_arr_record_num=4 00:19:29.942 05:10:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@113 -- # len_arr_trace_tool_num=4 00:19:29.942 05:10:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@116 -- # '[' 4 -ne 4 ']' 
00:19:29.942 05:10:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # seq 0 3 00:19:29.942 05:10:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # for i in $(seq 0 $((len_arr_record_num - 1))) 00:19:29.942 05:10:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@123 -- # '[' 153078 -le 4096 ']' 00:19:29.942 05:10:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@128 -- # '[' 153078 -ne 153078 ']' 00:19:29.942 05:10:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # for i in $(seq 0 $((len_arr_record_num - 1))) 00:19:29.942 05:10:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@123 -- # '[' 155563 -le 4096 ']' 00:19:29.942 05:10:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@128 -- # '[' 155563 -ne 155563 ']' 00:19:29.942 05:10:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # for i in $(seq 0 $((len_arr_record_num - 1))) 00:19:29.942 05:10:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@123 -- # '[' 160566 -le 4096 ']' 00:19:29.942 05:10:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@128 -- # '[' 160566 -ne 160566 ']' 00:19:29.942 05:10:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # for i in $(seq 0 $((len_arr_record_num - 1))) 00:19:29.942 05:10:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@123 -- # '[' 157599 -le 4096 ']' 00:19:29.942 05:10:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@128 -- # '[' 157599 -ne 157599 ']' 00:19:29.942 05:10:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@135 -- # trap - SIGINT SIGTERM EXIT 00:19:29.942 05:10:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@136 -- # iscsitestfini 00:19:29.942 05:10:28 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:19:29.942 
************************************ 00:19:29.942 00:19:29.942 real 0m21.700s 00:19:29.942 user 0m44.890s 00:19:29.942 sys 0m3.701s 00:19:29.942 05:10:28 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:29.942 05:10:28 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@10 -- # set +x 00:19:29.942 END TEST iscsi_tgt_trace_record 00:19:29.942 ************************************ 00:19:29.942 05:10:28 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:19:29.942 05:10:28 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@41 -- # run_test iscsi_tgt_login_redirection /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/login_redirection/login_redirection.sh 00:19:29.942 05:10:28 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:29.942 05:10:28 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:29.942 05:10:28 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:19:29.942 ************************************ 00:19:29.942 START TEST iscsi_tgt_login_redirection 00:19:29.942 ************************************ 00:19:29.942 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/login_redirection/login_redirection.sh 00:19:29.942 * Looking for test storage... 
00:19:29.942 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/login_redirection 00:19:29.942 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:19:29.942 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:19:29.942 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:19:29.943 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:19:29.943 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:19:29.943 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:19:29.943 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:19:29.943 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:19:29.943 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:19:29.943 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:19:29.943 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:19:29.943 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:19:29.943 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:19:29.943 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:19:29.943 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:19:29.943 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@25 -- # 
INITIATOR_TAG=2 00:19:29.943 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:19:29.943 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:19:29.943 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:19:29.943 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:19:29.943 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@12 -- # iscsitestinit 00:19:29.943 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:19:29.943 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@14 -- # NULL_BDEV_SIZE=64 00:19:29.943 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@15 -- # NULL_BLOCK_SIZE=512 00:19:29.943 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@17 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:29.943 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@18 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:19:29.943 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@20 -- # rpc_addr1=/var/tmp/spdk0.sock 00:19:29.943 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@21 -- # rpc_addr2=/var/tmp/spdk1.sock 00:19:29.943 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@25 -- # timing_enter start_iscsi_tgts 00:19:29.943 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:29.943 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:19:29.943 05:10:29 
iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@28 -- # pid1=89548 00:19:29.943 Process pid: 89548 00:19:29.943 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@29 -- # echo 'Process pid: 89548' 00:19:29.943 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@27 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -r /var/tmp/spdk0.sock -i 0 -m 0x1 --wait-for-rpc 00:19:29.943 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@32 -- # pid2=89549 00:19:29.943 Process pid: 89549 00:19:29.943 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@33 -- # echo 'Process pid: 89549' 00:19:29.943 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@35 -- # trap 'killprocess $pid1; killprocess $pid2; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:19:29.943 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@37 -- # waitforlisten 89548 /var/tmp/spdk0.sock 00:19:29.943 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@31 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -r /var/tmp/spdk1.sock -i 1 -m 0x2 --wait-for-rpc 00:19:29.943 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@829 -- # '[' -z 89548 ']' 00:19:29.943 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk0.sock 00:19:29.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk0.sock... 
00:19:29.943 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:29.943 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk0.sock...' 00:19:29.943 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:29.943 05:10:29 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:19:29.943 [2024-07-23 05:10:29.221957] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:19:29.943 [2024-07-23 05:10:29.221952] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:19:29.943 [2024-07-23 05:10:29.222083] [ DPDK EAL parameters: iscsi -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:29.943 [2024-07-23 05:10:29.222083] [ DPDK EAL parameters: iscsi -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:29.943 [2024-07-23 05:10:29.373075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.943 [2024-07-23 05:10:29.377409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.943 [2024-07-23 05:10:29.476106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.943 [2024-07-23 05:10:29.488561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:29.943 05:10:30 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:29.943 05:10:30 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@862 -- # return 0
00:19:29.943 05:10:30 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_set_options -w 0 -o 30 -a 16 00:19:30.204 05:10:30 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock framework_start_init 00:19:30.770 iscsi_tgt_1 is listening. 00:19:30.770 05:10:30 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@40 -- # echo 'iscsi_tgt_1 is listening.' 00:19:30.770 05:10:30 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@42 -- # waitforlisten 89549 /var/tmp/spdk1.sock 00:19:30.770 05:10:30 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@829 -- # '[' -z 89549 ']' 00:19:30.770 05:10:30 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk1.sock 00:19:30.770 05:10:30 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:30.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk1.sock... 00:19:30.770 05:10:30 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk1.sock...' 
00:19:30.770 05:10:30 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:30.770 05:10:30 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:19:31.029 05:10:31 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:31.029 05:10:31 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@862 -- # return 0 00:19:31.029 05:10:31 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_set_options -w 0 -o 30 -a 16 00:19:31.287 05:10:31 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock framework_start_init 00:19:31.546 iscsi_tgt_2 is listening. 00:19:31.546 05:10:31 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@45 -- # echo 'iscsi_tgt_2 is listening.' 
00:19:31.546 05:10:31 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@47 -- # timing_exit start_iscsi_tgts 00:19:31.546 05:10:31 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:31.546 05:10:31 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:19:31.804 05:10:31 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:19:31.804 05:10:32 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_create_portal_group 1 10.0.0.1:3260 00:19:32.063 05:10:32 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock bdev_null_create Null0 64 512 00:19:32.322 Null0 00:19:32.322 05:10:32 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_create_target_node Target1 Target1_alias Null0:0 1:2 64 -d 00:19:32.581 05:10:32 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:19:32.839 05:10:32 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_create_portal_group 1 10.0.0.3:3260 -p 00:19:33.097 05:10:33 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock bdev_null_create Null0 64 512 00:19:33.354 Null0 00:19:33.354 05:10:33 iscsi_tgt.iscsi_tgt_login_redirection 
-- login_redirection/login_redirection.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_create_target_node Target1 Target1_alias Null0:0 1:2 64 -d 00:19:33.612 05:10:33 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@67 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:19:33.612 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:19:33.612 05:10:33 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@68 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:19:33.612 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:19:33.612 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:19:33.612 05:10:33 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@69 -- # waitforiscsidevices 1 00:19:33.612 05:10:33 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@116 -- # local num=1 00:19:33.612 05:10:33 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:19:33.612 05:10:33 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:19:33.612 05:10:33 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:19:33.612 05:10:33 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:19:33.612 [2024-07-23 05:10:33.723507] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:33.612 05:10:33 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@119 -- # n=1 00:19:33.612 FIO pid: 89644 00:19:33.612 05:10:33 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:19:33.612 05:10:33 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@123 -- # return 0 00:19:33.612 05:10:33 iscsi_tgt.iscsi_tgt_login_redirection -- 
login_redirection/login_redirection.sh@72 -- # fiopid=89644 00:19:33.612 05:10:33 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t randrw -r 15 00:19:33.612 05:10:33 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@73 -- # echo 'FIO pid: 89644' 00:19:33.612 05:10:33 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@75 -- # trap 'iscsicleanup; killprocess $pid1; killprocess $pid2; killprocess $fiopid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:19:33.612 05:10:33 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_get_connections 00:19:33.612 05:10:33 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@77 -- # jq length 00:19:33.612 [global] 00:19:33.612 thread=1 00:19:33.612 invalidate=1 00:19:33.612 rw=randrw 00:19:33.612 time_based=1 00:19:33.612 runtime=15 00:19:33.612 ioengine=libaio 00:19:33.612 direct=1 00:19:33.612 bs=512 00:19:33.612 iodepth=1 00:19:33.612 norandommap=1 00:19:33.612 numjobs=1 00:19:33.612 00:19:33.612 [job0] 00:19:33.612 filename=/dev/sda 00:19:33.612 queue_depth set to 113 (sda) 00:19:33.871 job0: (g=0): rw=randrw, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:19:33.871 fio-3.35 00:19:33.871 Starting 1 thread 00:19:33.871 [2024-07-23 05:10:33.895564] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:33.871 05:10:34 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@77 -- # '[' 1 = 1 ']' 00:19:33.871 05:10:34 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@78 -- # jq length 00:19:33.871 05:10:34 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@78 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_get_connections 00:19:34.155 05:10:34 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@78 -- # '[' 0 = 0 ']' 00:19:34.155 05:10:34 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_target_node_set_redirect iqn.2016-06.io.spdk:Target1 1 -a 10.0.0.3 -p 3260 00:19:34.414 05:10:34 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_target_node_request_logout iqn.2016-06.io.spdk:Target1 -t 1 00:19:34.980 05:10:34 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@85 -- # sleep 5 00:19:40.285 05:10:39 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_get_connections 00:19:40.285 05:10:39 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@87 -- # jq length 00:19:40.285 05:10:40 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@87 -- # '[' 0 = 0 ']' 00:19:40.285 05:10:40 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_get_connections 00:19:40.285 05:10:40 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@88 -- # jq length 00:19:40.285 05:10:40 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@88 -- # '[' 1 = 1 ']' 00:19:40.285 05:10:40 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_target_node_set_redirect iqn.2016-06.io.spdk:Target1 1 00:19:40.543 05:10:40 
iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_target_node_request_logout iqn.2016-06.io.spdk:Target1 -t 1 00:19:40.802 05:10:40 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@93 -- # sleep 5 00:19:46.066 05:10:45 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_get_connections 00:19:46.066 05:10:45 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@95 -- # jq length 00:19:46.066 05:10:46 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@95 -- # '[' 1 = 1 ']' 00:19:46.066 05:10:46 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_get_connections 00:19:46.066 05:10:46 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@96 -- # jq length 00:19:46.324 05:10:46 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@96 -- # '[' 0 = 0 ']' 00:19:46.324 05:10:46 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@98 -- # wait 89644 00:19:48.854 [2024-07-23 05:10:49.000937] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:48.854 00:19:48.854 job0: (groupid=0, jobs=1): err= 0: pid=89673: Tue Jul 23 05:10:49 2024 00:19:48.854 read: IOPS=4661, BW=2331KiB/s (2387kB/s)(34.1MiB/15001msec) 00:19:48.854 slat (nsec): min=4140, max=95813, avg=6620.98, stdev=1846.73 00:19:48.854 clat (usec): min=37, max=2005.7k, avg=99.55, stdev=7584.64 00:19:48.854 lat (usec): min=65, max=2005.7k, avg=106.17, stdev=7584.66 00:19:48.854 clat percentiles (usec): 00:19:48.854 | 1.00th=[ 63], 5.00th=[ 64], 10.00th=[ 65], 20.00th=[ 66], 00:19:48.854 | 30.00th=[ 67], 
40.00th=[ 69], 50.00th=[ 69], 60.00th=[ 71], 00:19:48.854 | 70.00th=[ 72], 80.00th=[ 76], 90.00th=[ 82], 95.00th=[ 86], 00:19:48.854 | 99.00th=[ 97], 99.50th=[ 103], 99.90th=[ 127], 99.95th=[ 174], 00:19:48.854 | 99.99th=[ 685] 00:19:48.854 bw ( KiB/s): min= 537, max= 3384, per=100.00%, avg=2909.61, stdev=754.04, samples=23 00:19:48.854 iops : min= 1074, max= 6768, avg=5819.22, stdev=1508.07, samples=23 00:19:48.854 write: IOPS=4632, BW=2316KiB/s (2372kB/s)(33.9MiB/15001msec); 0 zone resets 00:19:48.854 slat (nsec): min=3985, max=71077, avg=6437.56, stdev=2030.60 00:19:48.854 clat (usec): min=33, max=2006.3k, avg=100.71, stdev=7610.20 00:19:48.854 lat (usec): min=65, max=2006.3k, avg=107.15, stdev=7610.20 00:19:48.854 clat percentiles (usec): 00:19:48.854 | 1.00th=[ 64], 5.00th=[ 65], 10.00th=[ 66], 20.00th=[ 67], 00:19:48.854 | 30.00th=[ 68], 40.00th=[ 69], 50.00th=[ 70], 60.00th=[ 72], 00:19:48.854 | 70.00th=[ 73], 80.00th=[ 77], 90.00th=[ 83], 95.00th=[ 87], 00:19:48.854 | 99.00th=[ 98], 99.50th=[ 104], 99.90th=[ 128], 99.95th=[ 200], 00:19:48.854 | 99.99th=[ 660] 00:19:48.854 bw ( KiB/s): min= 569, max= 3412, per=100.00%, avg=2895.87, stdev=740.83, samples=23 00:19:48.854 iops : min= 1138, max= 6824, avg=5791.74, stdev=1481.66, samples=23 00:19:48.854 lat (usec) : 50=0.07%, 100=99.19%, 250=0.71%, 500=0.02%, 750=0.01% 00:19:48.854 lat (usec) : 1000=0.01% 00:19:48.854 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:19:48.854 cpu : usr=3.37%, sys=7.33%, ctx=139725, majf=0, minf=1 00:19:48.854 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:48.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.854 issued rwts: total=69925,69495,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:48.854 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:48.854 00:19:48.854 Run status group 0 (all jobs): 00:19:48.854 READ: 
bw=2331KiB/s (2387kB/s), 2331KiB/s-2331KiB/s (2387kB/s-2387kB/s), io=34.1MiB (35.8MB), run=15001-15001msec 00:19:48.854 WRITE: bw=2316KiB/s (2372kB/s), 2316KiB/s-2316KiB/s (2372kB/s-2372kB/s), io=33.9MiB (35.6MB), run=15001-15001msec 00:19:48.854 00:19:48.854 Disk stats (read/write): 00:19:48.854 sda: ios=69308/68878, merge=0/0, ticks=6856/6894, in_queue=13751, util=99.51% 00:19:48.854 Cleaning up iSCSI connection 00:19:48.854 05:10:49 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@100 -- # trap - SIGINT SIGTERM EXIT 00:19:48.854 05:10:49 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@102 -- # iscsicleanup 00:19:48.854 05:10:49 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:19:48.854 05:10:49 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:19:49.113 Logging out of session [sid: 38, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:19:49.113 Logout of [sid: 38, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
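The fio summary above reports throughput in both binary (KiB/s) and decimal (kB/s) units. As a quick sanity check, both figures can be reproduced from the raw counters in the log (69925 issued reads of 512 bytes over the 15.001 s runtime); the short Python sketch below is illustrative only and not part of the test suite:

```python
# Reproduce fio's reported read bandwidth from the counters in the log:
# 69925 issued reads x 512-byte blocks over a 15.001 s runtime.
issued_reads = 69925
block_size = 512          # bytes, from the fio-wrapper "-i 512" argument
runtime_s = 15.001

bytes_per_sec = issued_reads * block_size / runtime_s
kib_per_sec = bytes_per_sec / 1024   # binary unit fio prints first
kb_per_sec = bytes_per_sec / 1000    # decimal unit in parentheses

print(round(kib_per_sec), round(kb_per_sec))  # matches the logged 2331KiB/s (2387kB/s)
```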
00:19:49.113 05:10:49 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:19:49.113 05:10:49 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@983 -- # rm -rf 00:19:49.113 05:10:49 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@103 -- # killprocess 89548 00:19:49.113 05:10:49 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@948 -- # '[' -z 89548 ']' 00:19:49.113 05:10:49 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@952 -- # kill -0 89548 00:19:49.113 05:10:49 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@953 -- # uname 00:19:49.113 05:10:49 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:49.113 05:10:49 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89548 00:19:49.113 killing process with pid 89548 00:19:49.113 05:10:49 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:49.113 05:10:49 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:49.113 05:10:49 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89548' 00:19:49.113 05:10:49 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@967 -- # kill 89548 00:19:49.113 05:10:49 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@972 -- # wait 89548 00:19:49.372 05:10:49 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@104 -- # killprocess 89549 00:19:49.372 05:10:49 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@948 -- # '[' -z 89549 ']' 00:19:49.372 05:10:49 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@952 -- # kill -0 89549 00:19:49.372 05:10:49 iscsi_tgt.iscsi_tgt_login_redirection -- 
common/autotest_common.sh@953 -- # uname 00:19:49.372 05:10:49 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:49.372 05:10:49 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89549 00:19:49.372 killing process with pid 89549 00:19:49.372 05:10:49 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:49.372 05:10:49 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:49.372 05:10:49 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89549' 00:19:49.372 05:10:49 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@967 -- # kill 89549 00:19:49.372 05:10:49 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@972 -- # wait 89549 00:19:49.940 05:10:49 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@105 -- # iscsitestfini 00:19:49.940 05:10:49 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:19:49.940 00:19:49.940 real 0m20.902s 00:19:49.940 user 0m41.534s 00:19:49.940 sys 0m5.870s 00:19:49.940 05:10:49 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:49.940 05:10:49 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:19:49.940 ************************************ 00:19:49.940 END TEST iscsi_tgt_login_redirection 00:19:49.940 ************************************ 00:19:49.940 05:10:49 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:19:49.940 05:10:49 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@42 -- # run_test iscsi_tgt_digests /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/digests/digests.sh 00:19:49.940 05:10:49 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:49.940 05:10:49 iscsi_tgt -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:19:49.940 05:10:49 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:19:49.940 ************************************ 00:19:49.940 START TEST iscsi_tgt_digests 00:19:49.940 ************************************ 00:19:49.940 05:10:49 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/digests/digests.sh 00:19:49.940 * Looking for test storage... 00:19:49.940 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/digests 00:19:49.940 05:10:50 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:19:49.940 05:10:50 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:19:49.940 05:10:50 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:19:49.940 05:10:50 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:19:49.940 05:10:50 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:19:49.940 05:10:50 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:19:49.940 05:10:50 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:19:49.940 05:10:50 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:19:49.940 05:10:50 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:19:49.940 05:10:50 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:19:49.940 05:10:50 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:19:49.940 05:10:50 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:19:49.940 05:10:50 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 
00:19:49.940 05:10:50 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:19:49.940 05:10:50 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:19:49.940 05:10:50 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:19:49.940 05:10:50 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:19:49.940 05:10:50 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:19:49.940 05:10:50 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:19:49.940 05:10:50 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:19:49.940 05:10:50 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@11 -- # iscsitestinit 00:19:49.940 05:10:50 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:19:49.940 05:10:50 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@49 -- # MALLOC_BDEV_SIZE=64 00:19:49.940 05:10:50 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:19:49.940 05:10:50 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@52 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:19:49.940 05:10:50 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@54 -- # timing_enter start_iscsi_tgt 00:19:49.940 05:10:50 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:49.940 05:10:50 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:19:49.940 05:10:50 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@57 -- # pid=89929 00:19:49.940 Process pid: 89929 00:19:49.940 05:10:50 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@58 -- # echo 'Process pid: 89929' 00:19:49.940 05:10:50 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@60 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:19:49.940 05:10:50 iscsi_tgt.iscsi_tgt_digests -- 
digests/digests.sh@56 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --wait-for-rpc 00:19:49.940 05:10:50 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@62 -- # waitforlisten 89929 00:19:49.940 05:10:50 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@829 -- # '[' -z 89929 ']' 00:19:49.940 05:10:50 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.940 05:10:50 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:49.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.940 05:10:50 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.940 05:10:50 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:49.940 05:10:50 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:19:49.940 [2024-07-23 05:10:50.135261] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:19:49.940 [2024-07-23 05:10:50.135380] [ DPDK EAL parameters: iscsi --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89929 ] 00:19:50.198 [2024-07-23 05:10:50.272044] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:50.198 [2024-07-23 05:10:50.371523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:50.198 [2024-07-23 05:10:50.371584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:50.198 [2024-07-23 05:10:50.371696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:50.198 [2024-07-23 05:10:50.371703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.134 05:10:51 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:51.134 05:10:51 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@862 -- # return 0 00:19:51.134 05:10:51 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@63 -- # rpc_cmd iscsi_set_options -o 30 -a 16 00:19:51.134 05:10:51 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.134 05:10:51 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:19:51.134 05:10:51 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.134 05:10:51 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@64 -- # rpc_cmd framework_start_init 00:19:51.134 05:10:51 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.134 05:10:51 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:19:51.393 05:10:51 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.393 iscsi_tgt is listening. Running tests... 
00:19:51.393 05:10:51 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@65 -- # echo 'iscsi_tgt is listening. Running tests...' 00:19:51.393 05:10:51 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@67 -- # timing_exit start_iscsi_tgt 00:19:51.393 05:10:51 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:51.393 05:10:51 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:19:51.393 05:10:51 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@69 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:19:51.393 05:10:51 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.393 05:10:51 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:19:51.393 05:10:51 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.393 05:10:51 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@70 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:19:51.393 05:10:51 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.393 05:10:51 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:19:51.393 05:10:51 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.393 05:10:51 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@71 -- # rpc_cmd bdev_malloc_create 64 512 00:19:51.393 05:10:51 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.393 05:10:51 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:19:51.393 Malloc0 00:19:51.393 05:10:51 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.393 05:10:51 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@76 -- # rpc_cmd iscsi_create_target_node Target3 Target3_alias Malloc0:0 1:2 64 -d 00:19:51.393 05:10:51 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.393 
05:10:51 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:19:51.393 05:10:51 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.393 05:10:51 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@77 -- # sleep 1 00:19:52.329 05:10:52 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@79 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:19:52.329 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:19:52.329 05:10:52 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@83 -- # iscsiadm -m node -p 10.0.0.1:3260 -o update -n 'node.conn[0].iscsi.DataDigest' -v None 00:19:52.329 05:10:52 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@83 -- # true 00:19:52.329 05:10:52 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@83 -- # DataDigestAbility='iscsiadm: Cannot modify node.conn[0].iscsi.DataDigest. Invalid param name. 00:19:52.329 iscsiadm: Could not execute operation on all records: invalid parameter' 00:19:52.329 05:10:52 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@84 -- # '[' 'iscsiadm: Cannot modify node.conn[0].iscsi.DataDigest. Invalid param name. 
00:19:52.329 iscsiadm: Could not execute operation on all records: invalid parameterx' '!=' x ']' 00:19:52.329 05:10:52 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@85 -- # run_test iscsi_tgt_digest iscsi_header_digest_test 00:19:52.329 05:10:52 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:52.329 05:10:52 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:52.329 05:10:52 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:19:52.329 ************************************ 00:19:52.329 START TEST iscsi_tgt_digest 00:19:52.329 ************************************ 00:19:52.329 05:10:52 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- common/autotest_common.sh@1123 -- # iscsi_header_digest_test 00:19:52.329 05:10:52 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@27 -- # node_login_fio_logout 'HeaderDigest -v CRC32C' 00:19:52.329 05:10:52 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@14 -- # for arg in "$@" 00:19:52.329 05:10:52 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@15 -- # iscsiadm -m node -p 10.0.0.1:3260 -o update -n 'node.conn[0].iscsi.HeaderDigest' -v CRC32C 00:19:52.329 05:10:52 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@17 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:19:52.329 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:19:52.329 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
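The `HeaderDigest -v CRC32C` setting negotiated above selects the iSCSI digest algorithm: CRC-32 with the Castagnoli polynomial, computed over each PDU header. A minimal bit-by-bit Python implementation, shown only to illustrate what initiator and target compute (real stacks, including SPDK, use table-driven or hardware-accelerated versions), is:

```python
def crc32c(data: bytes) -> int:
    """CRC-32C (Castagnoli), the digest iSCSI uses for HeaderDigest/DataDigest.

    Reflected algorithm: init 0xFFFFFFFF, reversed polynomial 0x82F63B78,
    final XOR 0xFFFFFFFF. Bit-by-bit for clarity, not speed.
    """
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Standard check value for CRC-32C: the ASCII digits "123456789".
assert crc32c(b"123456789") == 0xE3069283
```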
00:19:52.329 05:10:52 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@18 -- # waitforiscsidevices 1 00:19:52.329 05:10:52 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@116 -- # local num=1 00:19:52.329 05:10:52 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:19:52.329 05:10:52 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:19:52.329 05:10:52 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:19:52.329 05:10:52 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:19:52.329 [2024-07-23 05:10:52.534515] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:52.329 05:10:52 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # n=1 00:19:52.329 05:10:52 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:19:52.329 05:10:52 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@123 -- # return 0 00:19:52.329 05:10:52 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t write -r 2 00:19:52.588 [global] 00:19:52.588 thread=1 00:19:52.588 invalidate=1 00:19:52.588 rw=write 00:19:52.588 time_based=1 00:19:52.588 runtime=2 00:19:52.588 ioengine=libaio 00:19:52.588 direct=1 00:19:52.588 bs=512 00:19:52.588 iodepth=1 00:19:52.588 norandommap=1 00:19:52.588 numjobs=1 00:19:52.588 00:19:52.588 [job0] 00:19:52.588 filename=/dev/sda 00:19:52.588 queue_depth set to 113 (sda) 00:19:52.588 job0: (g=0): rw=write, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:19:52.588 fio-3.35 00:19:52.588 Starting 1 thread 00:19:52.588 [2024-07-23 05:10:52.708861] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:19:55.124 [2024-07-23 05:10:54.817703] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:55.124 00:19:55.124 job0: (groupid=0, jobs=1): err= 0: pid=90027: Tue Jul 23 05:10:54 2024 00:19:55.124 write: IOPS=10.1k, BW=5065KiB/s (5186kB/s)(9.90MiB/2001msec); 0 zone resets 00:19:55.124 slat (nsec): min=4256, max=62480, avg=6284.25, stdev=2100.65 00:19:55.124 clat (usec): min=76, max=1083, avg=91.67, stdev=13.41 00:19:55.124 lat (usec): min=81, max=1097, avg=97.95, stdev=14.14 00:19:55.124 clat percentiles (usec): 00:19:55.124 | 1.00th=[ 80], 5.00th=[ 82], 10.00th=[ 83], 20.00th=[ 85], 00:19:55.124 | 30.00th=[ 86], 40.00th=[ 87], 50.00th=[ 89], 60.00th=[ 91], 00:19:55.124 | 70.00th=[ 94], 80.00th=[ 99], 90.00th=[ 105], 95.00th=[ 112], 00:19:55.124 | 99.00th=[ 124], 99.50th=[ 129], 99.90th=[ 165], 99.95th=[ 243], 00:19:55.124 | 99.99th=[ 453] 00:19:55.124 bw ( KiB/s): min= 4720, max= 5214, per=98.49%, avg=4988.00, stdev=249.66, samples=3 00:19:55.124 iops : min= 9440, max=10428, avg=9976.00, stdev=499.33, samples=3 00:19:55.124 lat (usec) : 100=82.64%, 250=17.31%, 500=0.04%, 750=0.01% 00:19:55.124 lat (msec) : 2=0.01% 00:19:55.124 cpu : usr=2.40%, sys=8.60%, ctx=20275, majf=0, minf=1 00:19:55.124 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:55.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.124 issued rwts: total=0,20269,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.124 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:55.124 00:19:55.124 Run status group 0 (all jobs): 00:19:55.124 WRITE: bw=5065KiB/s (5186kB/s), 5065KiB/s-5065KiB/s (5186kB/s-5186kB/s), io=9.90MiB (10.4MB), run=2001-2001msec 00:19:55.124 00:19:55.124 Disk stats (read/write): 00:19:55.125 sda: ios=48/19082, merge=0/0, ticks=9/1745, in_queue=1754, util=95.37% 00:19:55.125 05:10:54 
iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t read -r 2 00:19:55.125 [global] 00:19:55.125 thread=1 00:19:55.125 invalidate=1 00:19:55.125 rw=read 00:19:55.125 time_based=1 00:19:55.125 runtime=2 00:19:55.125 ioengine=libaio 00:19:55.125 direct=1 00:19:55.125 bs=512 00:19:55.125 iodepth=1 00:19:55.125 norandommap=1 00:19:55.125 numjobs=1 00:19:55.125 00:19:55.125 [job0] 00:19:55.125 filename=/dev/sda 00:19:55.125 queue_depth set to 113 (sda) 00:19:55.125 job0: (g=0): rw=read, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:19:55.125 fio-3.35 00:19:55.125 Starting 1 thread 00:19:57.051 00:19:57.051 job0: (groupid=0, jobs=1): err= 0: pid=90080: Tue Jul 23 05:10:57 2024 00:19:57.051 read: IOPS=11.4k, BW=5718KiB/s (5855kB/s)(11.2MiB/2001msec) 00:19:57.051 slat (nsec): min=4262, max=71560, avg=6623.40, stdev=2166.39 00:19:57.051 clat (usec): min=33, max=2188, avg=80.06, stdev=18.16 00:19:57.051 lat (usec): min=74, max=2201, avg=86.68, stdev=18.66 00:19:57.051 clat percentiles (usec): 00:19:57.051 | 1.00th=[ 72], 5.00th=[ 74], 10.00th=[ 75], 20.00th=[ 76], 00:19:57.051 | 30.00th=[ 77], 40.00th=[ 78], 50.00th=[ 79], 60.00th=[ 80], 00:19:57.051 | 70.00th=[ 81], 80.00th=[ 84], 90.00th=[ 88], 95.00th=[ 92], 00:19:57.051 | 99.00th=[ 100], 99.50th=[ 105], 99.90th=[ 202], 99.95th=[ 277], 00:19:57.051 | 99.99th=[ 758] 00:19:57.051 bw ( KiB/s): min= 5432, max= 5914, per=99.58%, avg=5694.67, stdev=243.90, samples=3 00:19:57.051 iops : min=10864, max=11828, avg=11389.33, stdev=487.81, samples=3 00:19:57.051 lat (usec) : 50=0.01%, 100=98.94%, 250=0.99%, 500=0.03%, 750=0.01% 00:19:57.051 lat (usec) : 1000=0.01% 00:19:57.051 lat (msec) : 4=0.01% 00:19:57.051 cpu : usr=3.90%, sys=9.45%, ctx=22886, majf=0, minf=1 00:19:57.051 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:57.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.051 issued rwts: total=22883,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:57.051 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:57.051 00:19:57.051 Run status group 0 (all jobs): 00:19:57.051 READ: bw=5718KiB/s (5855kB/s), 5718KiB/s-5718KiB/s (5855kB/s-5855kB/s), io=11.2MiB (11.7MB), run=2001-2001msec 00:19:57.051 00:19:57.051 Disk stats (read/write): 00:19:57.051 sda: ios=21610/0, merge=0/0, ticks=1710/0, in_queue=1710, util=95.08% 00:19:57.051 05:10:57 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@21 -- # iscsiadm -m node --logout -p 10.0.0.1:3260 00:19:57.051 Logging out of session [sid: 39, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:19:57.051 Logout of [sid: 39, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:19:57.051 05:10:57 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@22 -- # waitforiscsidevices 0 00:19:57.051 05:10:57 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@116 -- # local num=0 00:19:57.051 05:10:57 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:19:57.051 05:10:57 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:19:57.051 05:10:57 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:19:57.051 05:10:57 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:19:57.051 iscsiadm: No active sessions. 
00:19:57.051 05:10:57 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # true 00:19:57.051 05:10:57 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # n=0 00:19:57.051 05:10:57 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:19:57.051 05:10:57 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@123 -- # return 0 00:19:57.051 05:10:57 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@31 -- # node_login_fio_logout 'HeaderDigest -v CRC32C,None' 00:19:57.051 05:10:57 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@14 -- # for arg in "$@" 00:19:57.051 05:10:57 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@15 -- # iscsiadm -m node -p 10.0.0.1:3260 -o update -n 'node.conn[0].iscsi.HeaderDigest' -v CRC32C,None 00:19:57.051 05:10:57 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@17 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:19:57.051 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:19:57.051 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
00:19:57.051 05:10:57 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@18 -- # waitforiscsidevices 1 00:19:57.051 05:10:57 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@116 -- # local num=1 00:19:57.051 05:10:57 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:19:57.051 05:10:57 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:19:57.051 05:10:57 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:19:57.051 05:10:57 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:19:57.051 [2024-07-23 05:10:57.216657] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:57.051 05:10:57 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # n=1 00:19:57.052 05:10:57 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:19:57.052 05:10:57 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@123 -- # return 0 00:19:57.052 05:10:57 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t write -r 2 00:19:57.052 [global] 00:19:57.052 thread=1 00:19:57.052 invalidate=1 00:19:57.052 rw=write 00:19:57.052 time_based=1 00:19:57.052 runtime=2 00:19:57.052 ioengine=libaio 00:19:57.052 direct=1 00:19:57.052 bs=512 00:19:57.052 iodepth=1 00:19:57.052 norandommap=1 00:19:57.052 numjobs=1 00:19:57.052 00:19:57.052 [job0] 00:19:57.052 filename=/dev/sda 00:19:57.052 queue_depth set to 113 (sda) 00:19:57.310 job0: (g=0): rw=write, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:19:57.310 fio-3.35 00:19:57.310 Starting 1 thread 00:19:57.310 [2024-07-23 05:10:57.382118] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:19:59.852 [2024-07-23 05:10:59.497068] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:59.852 00:19:59.852 job0: (groupid=0, jobs=1): err= 0: pid=90145: Tue Jul 23 05:10:59 2024 00:19:59.852 write: IOPS=10.4k, BW=5210KiB/s (5335kB/s)(10.2MiB/2001msec); 0 zone resets 00:19:59.852 slat (nsec): min=3878, max=58151, avg=6023.49, stdev=1318.42 00:19:59.852 clat (usec): min=76, max=2628, avg=89.19, stdev=24.82 00:19:59.852 lat (usec): min=83, max=2637, avg=95.21, stdev=25.00 00:19:59.852 clat percentiles (usec): 00:19:59.852 | 1.00th=[ 81], 5.00th=[ 83], 10.00th=[ 83], 20.00th=[ 85], 00:19:59.852 | 30.00th=[ 86], 40.00th=[ 87], 50.00th=[ 88], 60.00th=[ 89], 00:19:59.852 | 70.00th=[ 90], 80.00th=[ 92], 90.00th=[ 98], 95.00th=[ 101], 00:19:59.852 | 99.00th=[ 111], 99.50th=[ 115], 99.90th=[ 151], 99.95th=[ 285], 00:19:59.852 | 99.99th=[ 570] 00:19:59.852 bw ( KiB/s): min= 5030, max= 5329, per=99.95%, avg=5208.67, stdev=157.80, samples=3 00:19:59.852 iops : min=10060, max=10658, avg=10417.33, stdev=315.61, samples=3 00:19:59.852 lat (usec) : 100=93.78%, 250=6.16%, 500=0.04%, 750=0.01% 00:19:59.852 lat (msec) : 4=0.01% 00:19:59.852 cpu : usr=2.30%, sys=8.75%, ctx=20852, majf=0, minf=1 00:19:59.852 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:59.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.852 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.852 issued rwts: total=0,20852,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.852 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:59.852 00:19:59.852 Run status group 0 (all jobs): 00:19:59.852 WRITE: bw=5210KiB/s (5335kB/s), 5210KiB/s-5210KiB/s (5335kB/s-5335kB/s), io=10.2MiB (10.7MB), run=2001-2001msec 00:19:59.852 00:19:59.852 Disk stats (read/write): 00:19:59.852 sda: ios=48/19623, merge=0/0, ticks=9/1736, in_queue=1746, util=95.26% 00:19:59.852 05:10:59 
iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t read -r 2 00:19:59.852 [global] 00:19:59.852 thread=1 00:19:59.852 invalidate=1 00:19:59.852 rw=read 00:19:59.852 time_based=1 00:19:59.852 runtime=2 00:19:59.852 ioengine=libaio 00:19:59.852 direct=1 00:19:59.852 bs=512 00:19:59.852 iodepth=1 00:19:59.852 norandommap=1 00:19:59.852 numjobs=1 00:19:59.852 00:19:59.852 [job0] 00:19:59.852 filename=/dev/sda 00:19:59.852 queue_depth set to 113 (sda) 00:19:59.852 job0: (g=0): rw=read, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:19:59.852 fio-3.35 00:19:59.852 Starting 1 thread 00:20:01.757 00:20:01.757 job0: (groupid=0, jobs=1): err= 0: pid=90204: Tue Jul 23 05:11:01 2024 00:20:01.757 read: IOPS=11.3k, BW=5664KiB/s (5800kB/s)(11.1MiB/2001msec) 00:20:01.757 slat (nsec): min=3990, max=36969, avg=6020.97, stdev=1035.94 00:20:01.757 clat (usec): min=68, max=2216, avg=81.55, stdev=22.64 00:20:01.757 lat (usec): min=74, max=2222, avg=87.57, stdev=22.74 00:20:01.757 clat percentiles (usec): 00:20:01.757 | 1.00th=[ 73], 5.00th=[ 75], 10.00th=[ 76], 20.00th=[ 77], 00:20:01.757 | 30.00th=[ 78], 40.00th=[ 80], 50.00th=[ 81], 60.00th=[ 82], 00:20:01.757 | 70.00th=[ 83], 80.00th=[ 85], 90.00th=[ 90], 95.00th=[ 93], 00:20:01.757 | 99.00th=[ 101], 99.50th=[ 108], 99.90th=[ 204], 99.95th=[ 293], 00:20:01.757 | 99.99th=[ 865] 00:20:01.757 bw ( KiB/s): min= 5599, max= 5689, per=99.52%, avg=5637.33, stdev=46.46, samples=3 00:20:01.757 iops : min=11198, max=11378, avg=11274.67, stdev=92.92, samples=3 00:20:01.757 lat (usec) : 100=98.80%, 250=1.13%, 500=0.04%, 750=0.02%, 1000=0.01% 00:20:01.757 lat (msec) : 4=0.01% 00:20:01.757 cpu : usr=3.35%, sys=8.55%, ctx=22667, majf=0, minf=1 00:20:01.757 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:01.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:20:01.757 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:01.757 issued rwts: total=22667,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:01.757 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:01.757 00:20:01.757 Run status group 0 (all jobs): 00:20:01.757 READ: bw=5664KiB/s (5800kB/s), 5664KiB/s-5664KiB/s (5800kB/s-5800kB/s), io=11.1MiB (11.6MB), run=2001-2001msec 00:20:01.757 00:20:01.757 Disk stats (read/write): 00:20:01.757 sda: ios=21374/0, merge=0/0, ticks=1734/0, in_queue=1734, util=95.03% 00:20:01.757 05:11:01 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@21 -- # iscsiadm -m node --logout -p 10.0.0.1:3260 00:20:01.757 Logging out of session [sid: 40, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:20:01.757 Logout of [sid: 40, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:20:01.757 05:11:01 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@22 -- # waitforiscsidevices 0 00:20:01.757 05:11:01 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@116 -- # local num=0 00:20:01.757 05:11:01 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:20:01.757 05:11:01 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:20:01.757 05:11:01 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:20:01.757 05:11:01 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:20:01.757 iscsiadm: No active sessions. 
00:20:01.757 05:11:01 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # true 00:20:01.757 ************************************ 00:20:01.757 END TEST iscsi_tgt_digest 00:20:01.757 ************************************ 00:20:01.757 05:11:01 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # n=0 00:20:01.757 05:11:01 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:20:01.757 05:11:01 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@123 -- # return 0 00:20:01.757 00:20:01.757 real 0m9.406s 00:20:01.757 user 0m0.716s 00:20:01.757 sys 0m0.990s 00:20:01.757 05:11:01 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:01.757 05:11:01 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- common/autotest_common.sh@10 -- # set +x 00:20:01.757 05:11:01 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1142 -- # return 0 00:20:01.757 Cleaning up iSCSI connection 00:20:01.757 05:11:01 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:20:01.757 05:11:01 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@92 -- # iscsicleanup 00:20:01.757 05:11:01 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:20:01.757 05:11:01 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:20:01.757 iscsiadm: No matching sessions found 00:20:01.757 05:11:01 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@981 -- # true 00:20:01.757 05:11:01 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:20:01.757 05:11:01 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@983 -- # rm -rf 00:20:01.757 05:11:01 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@93 -- # killprocess 89929 00:20:01.757 05:11:01 iscsi_tgt.iscsi_tgt_digests -- 
common/autotest_common.sh@948 -- # '[' -z 89929 ']' 00:20:01.757 05:11:01 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@952 -- # kill -0 89929 00:20:01.757 05:11:01 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@953 -- # uname 00:20:01.757 05:11:01 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:01.757 05:11:01 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89929 00:20:01.757 killing process with pid 89929 00:20:01.757 05:11:01 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:01.757 05:11:01 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:01.757 05:11:01 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89929' 00:20:01.757 05:11:01 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@967 -- # kill 89929 00:20:01.757 05:11:01 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@972 -- # wait 89929 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@94 -- # iscsitestfini 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:20:02.327 00:20:02.327 real 0m12.405s 00:20:02.327 user 0m45.052s 00:20:02.327 sys 0m3.712s 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:20:02.327 ************************************ 00:20:02.327 END TEST iscsi_tgt_digests 00:20:02.327 ************************************ 00:20:02.327 05:11:02 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:20:02.327 05:11:02 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@43 -- # run_test iscsi_tgt_fuzz /home/vagrant/spdk_repo/spdk/test/fuzz/autofuzz_iscsi.sh --timeout=30 00:20:02.327 05:11:02 iscsi_tgt -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:02.327 05:11:02 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:02.327 05:11:02 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:20:02.327 ************************************ 00:20:02.327 START TEST iscsi_tgt_fuzz 00:20:02.327 ************************************ 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/fuzz/autofuzz_iscsi.sh --timeout=30 00:20:02.327 * Looking for test storage... 00:20:02.327 * Found test storage at /home/vagrant/spdk_repo/spdk/test/fuzz 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- 
iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@11 -- # iscsitestinit 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@13 -- # '[' -z 10.0.0.1 ']' 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@18 -- # '[' -z 10.0.0.2 ']' 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@23 -- # timing_enter iscsi_fuzz_test 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@25 -- # MALLOC_BDEV_SIZE=64 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@26 -- # MALLOC_BLOCK_SIZE=4096 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@28 -- # TEST_TIMEOUT=1200 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@31 -- # for i in "$@" 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@32 -- # case "$i" in 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- 
fuzz/autofuzz_iscsi.sh@34 -- # TEST_TIMEOUT=30 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@39 -- # timing_enter start_iscsi_tgt 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:02.327 Process iscsipid: 90299 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@42 -- # iscsipid=90299 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@41 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --disable-cpumask-locks --wait-for-rpc 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@43 -- # echo 'Process iscsipid: 90299' 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@45 -- # trap 'killprocess $iscsipid; exit 1' SIGINT SIGTERM EXIT 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@47 -- # waitforlisten 90299 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@829 -- # '[' -z 90299 ']' 00:20:02.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:02.327 05:11:02 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:03.264 05:11:03 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:03.264 05:11:03 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@862 -- # return 0 00:20:03.264 05:11:03 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@49 -- # rpc_cmd iscsi_set_options -o 60 -a 16 00:20:03.264 05:11:03 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.264 05:11:03 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:03.264 05:11:03 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.264 05:11:03 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@50 -- # rpc_cmd framework_start_init 00:20:03.264 05:11:03 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.264 05:11:03 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:03.528 05:11:03 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.528 iscsi_tgt is listening. Running tests... 00:20:03.528 05:11:03 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@51 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:20:03.528 05:11:03 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@52 -- # timing_exit start_iscsi_tgt 00:20:03.528 05:11:03 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:03.528 05:11:03 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:03.528 05:11:03 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@54 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:20:03.528 05:11:03 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.528 05:11:03 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:03.528 05:11:03 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.528 05:11:03 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@55 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:20:03.528 05:11:03 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.528 05:11:03 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:03.528 05:11:03 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.528 05:11:03 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@56 -- # rpc_cmd bdev_malloc_create 64 4096 00:20:03.528 05:11:03 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.528 05:11:03 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:03.788 Malloc0 00:20:03.788 05:11:03 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.788 05:11:03 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@57 -- # rpc_cmd iscsi_create_target_node disk1 disk1_alias Malloc0:0 1:2 256 -d 00:20:03.788 05:11:03 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.788 05:11:03 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:03.788 05:11:03 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:20:03.788 05:11:03 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@58 -- # sleep 1 00:20:04.723 05:11:04 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@60 -- # trap 'killprocess $iscsipid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:20:04.723 05:11:04 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/iscsi_fuzz/iscsi_fuzz -m 0xF0 -T 10.0.0.1 -t 30 00:20:36.887 Fuzzing completed. Shutting down the fuzz application. 00:20:36.887 00:20:36.887 device 0x2077110 stats: Sent 11367 valid opcode PDUs, 103519 invalid opcode PDUs. 00:20:36.887 05:11:35 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@64 -- # rpc_cmd iscsi_delete_target_node iqn.2016-06.io.spdk:disk1 00:20:36.887 05:11:35 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.887 05:11:35 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:36.887 05:11:35 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.887 05:11:35 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@67 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:36.887 05:11:35 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.887 05:11:35 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:36.887 05:11:35 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.887 05:11:35 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:20:36.887 05:11:35 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@71 -- # killprocess 90299 00:20:36.887 05:11:35 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@948 -- # '[' -z 90299 ']' 00:20:36.887 05:11:35 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@952 -- # kill -0 90299 00:20:36.887 05:11:35 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@953 -- # uname 00:20:36.887 05:11:35 iscsi_tgt.iscsi_tgt_fuzz -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:36.887 05:11:35 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90299 00:20:36.887 05:11:35 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:36.887 05:11:35 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:36.887 killing process with pid 90299 00:20:36.887 05:11:35 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90299' 00:20:36.887 05:11:35 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@967 -- # kill 90299 00:20:36.887 05:11:35 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@972 -- # wait 90299 00:20:36.887 05:11:35 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@73 -- # iscsitestfini 00:20:36.887 05:11:35 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:20:36.887 05:11:35 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@75 -- # timing_exit iscsi_fuzz_test 00:20:36.887 05:11:35 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:36.887 05:11:35 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:36.887 00:20:36.887 real 0m33.243s 00:20:36.887 user 3m9.733s 00:20:36.887 sys 0m16.229s 00:20:36.887 05:11:35 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:36.887 ************************************ 00:20:36.887 END TEST iscsi_tgt_fuzz 00:20:36.887 05:11:35 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:36.887 ************************************ 00:20:36.887 05:11:35 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:20:36.887 05:11:35 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@44 -- # run_test iscsi_tgt_multiconnection /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/multiconnection/multiconnection.sh 00:20:36.887 05:11:35 iscsi_tgt -- common/autotest_common.sh@1099 -- # 
'[' 2 -le 1 ']' 00:20:36.887 05:11:35 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:36.887 05:11:35 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:20:36.887 ************************************ 00:20:36.887 START TEST iscsi_tgt_multiconnection 00:20:36.887 ************************************ 00:20:36.887 05:11:35 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/multiconnection/multiconnection.sh 00:20:36.887 * Looking for test storage... 00:20:36.887 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/multiconnection 00:20:36.887 05:11:35 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:20:36.888 05:11:35 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:20:36.888 05:11:35 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:20:36.888 05:11:35 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:20:36.888 05:11:35 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:20:36.888 05:11:35 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:20:36.888 05:11:35 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:20:36.888 05:11:35 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:20:36.888 05:11:35 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:20:36.888 05:11:35 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:20:36.888 05:11:35 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:20:36.888 
05:11:35 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:20:36.888 05:11:35 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:20:36.888 05:11:35 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:20:36.888 05:11:35 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:20:36.888 05:11:35 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:20:36.888 05:11:35 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:20:36.888 05:11:35 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:20:36.888 05:11:35 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:20:36.888 05:11:35 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:20:36.888 05:11:35 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@11 -- # iscsitestinit 00:20:36.888 05:11:35 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:20:36.888 05:11:35 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:36.888 05:11:35 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@16 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:20:36.888 05:11:35 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@18 -- # CONNECTION_NUMBER=30 00:20:36.888 05:11:35 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@40 -- # timing_enter start_iscsi_tgt 00:20:36.888 05:11:35 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:36.888 05:11:35 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@10 -- # set 
+x 00:20:36.888 05:11:35 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@42 -- # iscsipid=90723 00:20:36.888 iSCSI target launched. pid: 90723 00:20:36.888 05:11:35 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@43 -- # echo 'iSCSI target launched. pid: 90723' 00:20:36.888 05:11:35 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@44 -- # trap 'remove_backends; iscsicleanup; killprocess $iscsipid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:20:36.888 05:11:35 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@46 -- # waitforlisten 90723 00:20:36.888 05:11:35 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 90723 ']' 00:20:36.888 05:11:35 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@41 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:20:36.888 05:11:35 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.888 05:11:35 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:36.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.888 05:11:35 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.888 05:11:35 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:36.888 05:11:35 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:36.888 [2024-07-23 05:11:35.868528] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:20:36.888 [2024-07-23 05:11:35.868635] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90723 ] 00:20:36.888 [2024-07-23 05:11:36.003035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.888 [2024-07-23 05:11:36.093361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.888 05:11:36 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:36.888 05:11:36 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:20:36.888 05:11:36 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_options -o 30 -a 128 00:20:36.888 05:11:36 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:20:37.456 05:11:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:37.456 05:11:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:20:37.714 05:11:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@50 -- # timing_exit start_iscsi_tgt 00:20:37.714 05:11:37 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:37.714 05:11:37 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:37.714 05:11:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:20:38.282 05:11:38 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:20:38.540 Creating an iSCSI target node. 00:20:38.540 05:11:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@55 -- # echo 'Creating an iSCSI target node.' 00:20:38.540 05:11:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs0 -c 1048576 00:20:38.798 05:11:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@56 -- # ls_guid=121fe12e-e42d-4c8c-9498-49d09234db5e 00:20:38.798 05:11:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@59 -- # get_lvs_free_mb 121fe12e-e42d-4c8c-9498-49d09234db5e 00:20:38.798 05:11:38 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1364 -- # local lvs_uuid=121fe12e-e42d-4c8c-9498-49d09234db5e 00:20:38.798 05:11:38 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1365 -- # local lvs_info 00:20:38.798 05:11:38 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1366 -- # local fc 00:20:38.798 05:11:38 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1367 -- # local cs 00:20:38.798 05:11:38 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:39.057 05:11:39 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:20:39.057 { 00:20:39.057 "uuid": "121fe12e-e42d-4c8c-9498-49d09234db5e", 00:20:39.057 "name": "lvs0", 00:20:39.057 "base_bdev": "Nvme0n1", 00:20:39.057 "total_data_clusters": 5099, 00:20:39.057 "free_clusters": 5099, 00:20:39.057 "block_size": 4096, 00:20:39.057 "cluster_size": 1048576 00:20:39.057 } 00:20:39.057 ]' 00:20:39.057 05:11:39 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1369 -- # jq '.[] | 
select(.uuid=="121fe12e-e42d-4c8c-9498-49d09234db5e") .free_clusters' 00:20:39.057 05:11:39 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1369 -- # fc=5099 00:20:39.057 05:11:39 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="121fe12e-e42d-4c8c-9498-49d09234db5e") .cluster_size' 00:20:39.057 05:11:39 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1370 -- # cs=1048576 00:20:39.057 05:11:39 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1373 -- # free_mb=5099 00:20:39.057 5099 00:20:39.057 05:11:39 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1374 -- # echo 5099 00:20:39.057 05:11:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@60 -- # lvol_bdev_size=169 00:20:39.057 05:11:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # seq 1 30 00:20:39.057 05:11:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:39.057 05:11:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 121fe12e-e42d-4c8c-9498-49d09234db5e lbd_1 169 00:20:39.315 22dd9acd-af05-4771-9a1a-fb317ca57322 00:20:39.315 05:11:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:39.315 05:11:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 121fe12e-e42d-4c8c-9498-49d09234db5e lbd_2 169 00:20:39.573 f1344b15-5e21-457a-be45-4341661ba9b5 00:20:39.573 05:11:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:39.573 05:11:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # 
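The trace above derives `free_mb=5099` and `lvol_bdev_size=169` from the lvstore's `free_clusters` and `cluster_size`. A minimal sketch of that arithmetic, using the values visible in this log (the real logic lives in `get_lvs_free_mb` in common/autotest_common.sh and in multiconnection.sh; this is an assumption from the log, not a copy of the script):

```shell
# Values reported by bdev_lvol_get_lvstores in the trace above
fc=5099          # free_clusters
cs=1048576       # cluster_size in bytes (-c 1048576 at lvstore creation)

free_mb=$((fc * cs / 1024 / 1024))    # free space in MiB -> 5099
lvol_bdev_size=$((free_mb / 30))      # split across 30 connections -> 169
echo "$free_mb $lvol_bdev_size"
```

Integer division is what makes 5099 / 30 come out to 169 rather than 169.97, matching `lvol_bdev_size=169` in the trace.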
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 121fe12e-e42d-4c8c-9498-49d09234db5e lbd_3 169 00:20:39.832 96dde8b1-e11f-4aa0-8cdc-a9f432cb4a51 00:20:39.832 05:11:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:39.832 05:11:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 121fe12e-e42d-4c8c-9498-49d09234db5e lbd_4 169 00:20:40.090 5ec3b442-56f1-49a2-86e7-ed73e22f198e 00:20:40.090 05:11:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:40.090 05:11:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 121fe12e-e42d-4c8c-9498-49d09234db5e lbd_5 169 00:20:40.349 0977cbca-d27c-44cb-814c-b494bc065668 00:20:40.349 05:11:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:40.349 05:11:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 121fe12e-e42d-4c8c-9498-49d09234db5e lbd_6 169 00:20:40.607 552501b5-f707-4323-98c1-458f6960b95b 00:20:40.607 05:11:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:40.607 05:11:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 121fe12e-e42d-4c8c-9498-49d09234db5e lbd_7 169 00:20:40.866 33a8293b-8421-4de4-ad2d-2beaa80065ff 00:20:40.866 05:11:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:40.866 05:11:40 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 121fe12e-e42d-4c8c-9498-49d09234db5e lbd_8 169 00:20:41.124 20ee0042-72a7-461c-8950-826f4ed044c9 00:20:41.124 05:11:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:41.124 05:11:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 121fe12e-e42d-4c8c-9498-49d09234db5e lbd_9 169 00:20:41.382 818f2c74-abf7-48b4-a10f-1aaa6d8f9306 00:20:41.382 05:11:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:41.382 05:11:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 121fe12e-e42d-4c8c-9498-49d09234db5e lbd_10 169 00:20:41.640 5da7b3e1-8d86-4bee-a3eb-f4c5e5a5e642 00:20:41.640 05:11:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:41.640 05:11:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 121fe12e-e42d-4c8c-9498-49d09234db5e lbd_11 169 00:20:41.898 01c84ac6-4d0f-4038-bf61-1a210cd9979b 00:20:41.898 05:11:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:41.898 05:11:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 121fe12e-e42d-4c8c-9498-49d09234db5e lbd_12 169 00:20:42.157 a707c1e4-dae5-4a15-86ca-835f78cb64a5 00:20:42.157 05:11:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:42.157 05:11:42 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 121fe12e-e42d-4c8c-9498-49d09234db5e lbd_13 169 00:20:42.415 b9a5e644-297d-42fe-94dd-11dc863a23da 00:20:42.415 05:11:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:42.415 05:11:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 121fe12e-e42d-4c8c-9498-49d09234db5e lbd_14 169 00:20:42.415 79c74604-49bb-4f1e-9133-2047fd540323 00:20:42.674 05:11:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:42.674 05:11:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 121fe12e-e42d-4c8c-9498-49d09234db5e lbd_15 169 00:20:42.932 db2e837a-2a2c-4bdc-9940-6d77bf865b0b 00:20:42.932 05:11:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:42.932 05:11:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 121fe12e-e42d-4c8c-9498-49d09234db5e lbd_16 169 00:20:43.190 5384cb09-ed26-47c3-8dad-11e8c316ce00 00:20:43.190 05:11:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:43.190 05:11:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 121fe12e-e42d-4c8c-9498-49d09234db5e lbd_17 169 00:20:43.190 60db7383-cb97-4f0a-9be1-2b88ce1fa264 00:20:43.190 05:11:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 
$CONNECTION_NUMBER) 00:20:43.190 05:11:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 121fe12e-e42d-4c8c-9498-49d09234db5e lbd_18 169 00:20:43.448 b8884362-7314-43ec-951d-c3a8e60bfdd2 00:20:43.448 05:11:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:43.448 05:11:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 121fe12e-e42d-4c8c-9498-49d09234db5e lbd_19 169 00:20:43.705 58765a23-b07f-408e-b89b-4e20397e7523 00:20:43.705 05:11:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:43.705 05:11:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 121fe12e-e42d-4c8c-9498-49d09234db5e lbd_20 169 00:20:43.963 3323802a-d06e-4b55-9ce0-198e942b23f2 00:20:43.963 05:11:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:43.963 05:11:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 121fe12e-e42d-4c8c-9498-49d09234db5e lbd_21 169 00:20:44.229 8c6792b7-35a8-4796-85ec-faa22a308af8 00:20:44.229 05:11:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:44.229 05:11:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 121fe12e-e42d-4c8c-9498-49d09234db5e lbd_22 169 00:20:44.503 40fc434b-4cee-406e-be56-4612073f3af5 00:20:44.503 05:11:44 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:44.503 05:11:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 121fe12e-e42d-4c8c-9498-49d09234db5e lbd_23 169 00:20:44.761 ddeb4272-0cd0-41a3-b8bb-322ffaaf7739 00:20:44.761 05:11:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:44.761 05:11:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 121fe12e-e42d-4c8c-9498-49d09234db5e lbd_24 169 00:20:45.019 78d5c641-7499-4e1c-a6b1-ecfb121003af 00:20:45.019 05:11:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:45.019 05:11:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 121fe12e-e42d-4c8c-9498-49d09234db5e lbd_25 169 00:20:45.278 7a63479e-3601-4e88-ab2b-5538b8e82c7e 00:20:45.278 05:11:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:45.278 05:11:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 121fe12e-e42d-4c8c-9498-49d09234db5e lbd_26 169 00:20:45.537 380e8e46-d8eb-4366-8a3f-2b6392b0bfb7 00:20:45.537 05:11:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:45.537 05:11:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 121fe12e-e42d-4c8c-9498-49d09234db5e lbd_27 169 00:20:45.796 cb760e08-0c01-463d-819a-77592ae9aba9 00:20:45.796 05:11:45 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:45.796 05:11:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 121fe12e-e42d-4c8c-9498-49d09234db5e lbd_28 169 00:20:46.055 1fa361ba-a00b-4ac8-80fe-0e3f9b0ab3bd 00:20:46.055 05:11:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:46.055 05:11:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 121fe12e-e42d-4c8c-9498-49d09234db5e lbd_29 169 00:20:46.320 be7dc669-6f49-4674-b307-9e90a526f865 00:20:46.320 05:11:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:46.320 05:11:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 121fe12e-e42d-4c8c-9498-49d09234db5e lbd_30 169 00:20:46.320 b5bf3155-3fa9-4474-9539-bb998dca6dc7 00:20:46.320 05:11:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # seq 1 30 00:20:46.591 05:11:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:46.591 05:11:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_1:0 00:20:46.591 05:11:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target1 Target1_alias lvs0/lbd_1:0 1:2 256 -d 00:20:46.591 05:11:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:46.591 05:11:46 iscsi_tgt.iscsi_tgt_multiconnection -- 
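The thirty `bdev_lvol_create` calls traced above all follow one loop (multiconnection.sh @61-@62). A hedged sketch of that loop, with the RPC commands echoed instead of executed so it runs without a live SPDK target (the `scripts/rpc.py` path and exact script internals are assumptions):

```shell
CONNECTION_NUMBER=30
lvol_bdev_size=169
ls_guid=121fe12e-e42d-4c8c-9498-49d09234db5e   # lvstore UUID from the trace

# One lvol bdev per connection: lbd_1 .. lbd_30, each 169 MiB
gen_lvol_cmds() {
    for i in $(seq 1 "$CONNECTION_NUMBER"); do
        echo "scripts/rpc.py bdev_lvol_create -u $ls_guid lbd_$i $lvol_bdev_size"
    done
}
gen_lvol_cmds
```

In the real test each command is executed and prints the new lvol's UUID, which is what the bare UUID lines in the trace above are.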
multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_2:0 00:20:46.591 05:11:46 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target2 Target2_alias lvs0/lbd_2:0 1:2 256 -d 00:20:46.848 05:11:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:46.848 05:11:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_3:0 00:20:46.848 05:11:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias lvs0/lbd_3:0 1:2 256 -d 00:20:47.106 05:11:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:47.106 05:11:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_4:0 00:20:47.106 05:11:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target4 Target4_alias lvs0/lbd_4:0 1:2 256 -d 00:20:47.363 05:11:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:47.363 05:11:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_5:0 00:20:47.363 05:11:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target5 Target5_alias lvs0/lbd_5:0 1:2 256 -d 00:20:47.621 05:11:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:47.621 05:11:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_6:0 00:20:47.621 
05:11:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target6 Target6_alias lvs0/lbd_6:0 1:2 256 -d 00:20:47.879 05:11:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:47.879 05:11:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_7:0 00:20:47.879 05:11:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target7 Target7_alias lvs0/lbd_7:0 1:2 256 -d 00:20:48.137 05:11:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:48.137 05:11:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_8:0 00:20:48.137 05:11:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target8 Target8_alias lvs0/lbd_8:0 1:2 256 -d 00:20:48.413 05:11:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:48.413 05:11:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_9:0 00:20:48.413 05:11:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target9 Target9_alias lvs0/lbd_9:0 1:2 256 -d 00:20:48.672 05:11:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:48.672 05:11:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_10:0 00:20:48.672 05:11:48 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target10 Target10_alias lvs0/lbd_10:0 1:2 256 -d 00:20:48.930 05:11:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:48.930 05:11:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_11:0 00:20:48.930 05:11:48 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target11 Target11_alias lvs0/lbd_11:0 1:2 256 -d 00:20:49.188 05:11:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:49.188 05:11:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_12:0 00:20:49.188 05:11:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target12 Target12_alias lvs0/lbd_12:0 1:2 256 -d 00:20:49.445 05:11:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:49.446 05:11:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_13:0 00:20:49.446 05:11:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target13 Target13_alias lvs0/lbd_13:0 1:2 256 -d 00:20:49.704 05:11:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:49.704 05:11:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_14:0 00:20:49.704 05:11:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target14 Target14_alias lvs0/lbd_14:0 1:2 256 -d 00:20:49.962 05:11:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:49.962 05:11:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_15:0 00:20:49.962 05:11:49 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target15 Target15_alias lvs0/lbd_15:0 1:2 256 -d 00:20:50.220 05:11:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:50.220 05:11:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_16:0 00:20:50.220 05:11:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target16 Target16_alias lvs0/lbd_16:0 1:2 256 -d 00:20:50.220 05:11:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:50.220 05:11:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_17:0 00:20:50.220 05:11:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target17 Target17_alias lvs0/lbd_17:0 1:2 256 -d 00:20:50.478 05:11:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:50.478 05:11:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_18:0 00:20:50.478 05:11:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
iscsi_create_target_node Target18 Target18_alias lvs0/lbd_18:0 1:2 256 -d 00:20:50.738 05:11:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:50.738 05:11:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_19:0 00:20:50.738 05:11:50 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target19 Target19_alias lvs0/lbd_19:0 1:2 256 -d 00:20:50.996 05:11:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:50.996 05:11:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_20:0 00:20:50.996 05:11:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target20 Target20_alias lvs0/lbd_20:0 1:2 256 -d 00:20:51.261 05:11:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:51.261 05:11:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_21:0 00:20:51.261 05:11:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target21 Target21_alias lvs0/lbd_21:0 1:2 256 -d 00:20:51.520 05:11:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:51.520 05:11:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_22:0 00:20:51.520 05:11:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target22 Target22_alias 
lvs0/lbd_22:0 1:2 256 -d 00:20:51.778 05:11:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:51.778 05:11:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_23:0 00:20:51.778 05:11:51 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target23 Target23_alias lvs0/lbd_23:0 1:2 256 -d 00:20:52.036 05:11:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:52.036 05:11:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_24:0 00:20:52.036 05:11:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target24 Target24_alias lvs0/lbd_24:0 1:2 256 -d 00:20:52.294 05:11:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:52.294 05:11:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_25:0 00:20:52.294 05:11:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target25 Target25_alias lvs0/lbd_25:0 1:2 256 -d 00:20:52.553 05:11:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:52.553 05:11:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_26:0 00:20:52.553 05:11:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target26 Target26_alias lvs0/lbd_26:0 1:2 256 -d 00:20:52.819 05:11:52 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:52.819 05:11:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_27:0 00:20:52.819 05:11:52 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target27 Target27_alias lvs0/lbd_27:0 1:2 256 -d 00:20:53.089 05:11:53 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:53.089 05:11:53 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_28:0 00:20:53.089 05:11:53 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target28 Target28_alias lvs0/lbd_28:0 1:2 256 -d 00:20:53.347 05:11:53 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:53.347 05:11:53 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_29:0 00:20:53.347 05:11:53 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target29 Target29_alias lvs0/lbd_29:0 1:2 256 -d 00:20:53.347 05:11:53 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:53.347 05:11:53 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_30:0 00:20:53.347 05:11:53 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target30 Target30_alias lvs0/lbd_30:0 1:2 256 -d 00:20:53.605 05:11:53 iscsi_tgt.iscsi_tgt_multiconnection -- 
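The `iscsi_create_target_node` calls traced above likewise come from one loop (multiconnection.sh @65-@67). A sketch, echoed rather than executed; the flag meanings are assumptions from the SPDK RPC conventions: `1:2` maps portal group 1 to initiator group 2, `256` is the queue depth, and `-d` disables CHAP:

```shell
CONNECTION_NUMBER=30

# One target node per connection, each backed by one lvol bdev as LUN 0
gen_target_cmds() {
    for i in $(seq 1 "$CONNECTION_NUMBER"); do
        lun="lvs0/lbd_$i:0"
        echo "scripts/rpc.py iscsi_create_target_node Target$i Target${i}_alias $lun 1:2 256 -d"
    done
}
gen_target_cmds
```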
multiconnection/multiconnection.sh@69 -- # sleep 1 00:20:54.979 Logging into iSCSI target. 00:20:54.979 05:11:54 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@71 -- # echo 'Logging into iSCSI target.' 00:20:54.980 05:11:54 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@72 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:20:54.980 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:20:54.980 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target2 00:20:54.980 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:20:54.980 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target4 00:20:54.980 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target5 00:20:54.980 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target6 00:20:54.980 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target7 00:20:54.980 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target8 00:20:54.980 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target9 00:20:54.980 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target10 00:20:54.980 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target11 00:20:54.980 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target12 00:20:54.980 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target13 00:20:54.980 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target14 00:20:54.980 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target15 00:20:54.980 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target16 00:20:54.980 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target17 00:20:54.980 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target18 00:20:54.980 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target19 00:20:54.980 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target20 00:20:54.980 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target21 00:20:54.980 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target22 00:20:54.980 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target23 00:20:54.980 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target24 00:20:54.980 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target25 00:20:54.980 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target26 00:20:54.980 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target27 00:20:54.980 10.0.0.1:3260,1 
iqn.2016-06.io.spdk:Target28 00:20:54.980 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target29 00:20:54.980 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target30 00:20:54.980 05:11:54 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@73 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:20:54.980 [2024-07-23 05:11:54.857922] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:54.980 [2024-07-23 05:11:54.864916] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:54.980 [2024-07-23 05:11:54.887747] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:54.980 [2024-07-23 05:11:54.890300] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:54.980 [2024-07-23 05:11:54.928716] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:54.980 [2024-07-23 05:11:54.955812] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:54.980 [2024-07-23 05:11:54.977576] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:54.980 [2024-07-23 05:11:55.005232] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:54.980 [2024-07-23 05:11:55.043418] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:54.980 [2024-07-23 05:11:55.055457] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:54.980 [2024-07-23 05:11:55.095183] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:54.980 [2024-07-23 05:11:55.122151] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:54.980 [2024-07-23 05:11:55.158233] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:54.980 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:20:54.980 
Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:20:54.980 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:20:54.980 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:20:54.980 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:20:54.980 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:20:54.980 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:20:54.980 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:20:54.980 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:20:54.980 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:20:54.980 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] 00:20:54.980 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] 00:20:54.980 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] 00:20:54.980 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] 00:20:54.980 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] 00:20:54.980 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target16, portal: 10.0.0.1,3260] 00:20:54.980 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target17, portal: 10.0.0.1,3260] 00:20:54.980 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target18, portal: 10.0.0.1,3260] 00:20:54.980 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target19, portal: 10.0.0.1,3260] 00:20:54.980 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target20, portal: 10.0.0.1,3260] 00:20:54.980 Logging in 
to [iface: default, target: iqn.2016-06.io.spdk:Target21, portal: 10.0.0.1,3260] 00:20:54.980 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target22, portal: 10.0.0.1,3260] 00:20:54.980 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target23, portal: 10.0.0.1,3260] 00:20:54.980 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target24, portal: 10.0.0.1,3260] 00:20:54.980 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target25, portal: 10.0.0.1,3260] 00:20:54.980 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target26, portal: 10.0.0.1,3260] 00:20:54.980 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target27, portal: 10.0.0.1,3260] 00:20:54.980 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target28, portal: 10.0.0.1,3260] 00:20:54.980 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target29, portal: 10.0.0.1,3260] 00:20:54.980 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target30, portal: 10.0.0.1,3260] 00:20:54.980 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:20:54.980 Login to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:20:54.980 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:20:54.980 Login to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:20:54.980 Login to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 00:20:54.980 Login to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 00:20:54.980 Login to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:20:54.980 Login to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 
00:20:54.980 Login to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:20:54.980 Login to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:20:54.980 Login to [iface: default, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] successful. 00:20:54.980 Login to [iface: default, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] successful. 00:20:54.980 Login to [iface: default, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] successful. 00:20:54.980 Login to [iface: default, target: iqn.2016-06.io.spdk:Target14, por[2024-07-23 05:11:55.179976] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:55.239 [2024-07-23 05:11:55.199046] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:55.239 [2024-07-23 05:11:55.220957] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:55.239 [2024-07-23 05:11:55.243361] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:55.239 [2024-07-23 05:11:55.267399] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:55.239 [2024-07-23 05:11:55.285596] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:55.239 [2024-07-23 05:11:55.315529] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:55.239 [2024-07-23 05:11:55.340011] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:55.239 [2024-07-23 05:11:55.360113] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:55.239 [2024-07-23 05:11:55.395452] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:55.239 [2024-07-23 05:11:55.429959] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:55.239 [2024-07-23 
05:11:55.455611] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:55.497 [2024-07-23 05:11:55.475975] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:55.497 [2024-07-23 05:11:55.517025] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:55.497 [2024-07-23 05:11:55.544919] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:55.497 tal: 10.0.0.1,3260] successful. 00:20:55.497 Login to [iface: default, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] successful. 00:20:55.497 Login to [iface: default, target: iqn.2016-06.io.spdk:Target16, portal: 10.0.0.1,3260] successful. 00:20:55.497 Login to [iface: default, target: iqn.2016-06.io.spdk:Target17, portal: 10.0.0.1,3260] successful. 00:20:55.497 Login to [iface: default, target: iqn.2016-06.io.spdk:Target18, portal: 10.0.0.1,3260] successful. 00:20:55.497 Login to [iface: default, target: iqn.2016-06.io.spdk:Target19, portal: 10.0.0.1,3260] successful. 00:20:55.497 Login to [iface: default, target: iqn.2016-06.io.spdk:Target20, portal: 10.0.0.1,3260] successful. 00:20:55.497 Login to [iface: default, target: iqn.2016-06.io.spdk:Target21, portal: 10.0.0.1,3260] successful. 00:20:55.497 Login to [iface: default, target: iqn.2016-06.io.spdk:Target22, portal: 10.0.0.1,3260] successful. 00:20:55.497 Login to [iface: default, target: iqn.2016-06.io.spdk:Target23, portal: 10.0.0.1,3260] successful. 00:20:55.497 Login to [iface: default, target: iqn.2016-06.io.spdk:Target24, portal: 10.0.0.1,3260] successful. 00:20:55.497 Login to [iface: default, target: iqn.2016-06.io.spdk:Target25, portal: 10.0.0.1,3260] successful. 00:20:55.497 Login to [iface: default, target: iqn.2016-06.io.spdk:Target26, portal: 10.0.0.1,3260] successful. 00:20:55.497 Login to [iface: default, target: iqn.2016-06.io.spdk:Target27, portal: 10.0.0.1,3260] successful. 
00:20:55.497 Login to [iface: default, target: iqn.2016-06.io.spdk:Target28, portal: 10.0.0.1,3260] successful. 00:20:55.497 Login to [iface: default, target: iqn.2016-06.io.spdk:Target29, portal: 10.0.0.1,3260] successful. 00:20:55.497 Login to [iface: default, target: iqn.2016-06.io.spdk:Target30, portal: 10.0.0.1,3260] successful. 00:20:55.497 05:11:55 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@74 -- # waitforiscsidevices 30 00:20:55.497 05:11:55 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@116 -- # local num=30 00:20:55.497 05:11:55 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:20:55.497 05:11:55 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:20:55.497 05:11:55 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:20:55.497 05:11:55 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:20:55.497 [2024-07-23 05:11:55.594817] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:55.497 [2024-07-23 05:11:55.595939] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:55.497 05:11:55 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@119 -- # n=30 00:20:55.497 05:11:55 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@120 -- # '[' 30 -ne 30 ']' 00:20:55.497 05:11:55 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@123 -- # return 0 00:20:55.497 Running FIO 00:20:55.497 05:11:55 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@76 -- # echo 'Running FIO' 00:20:55.497 05:11:55 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 64 -t randrw -r 5 00:20:55.755 [global] 00:20:55.755 thread=1 00:20:55.755 invalidate=1 00:20:55.755 rw=randrw 
00:20:55.755 time_based=1 00:20:55.755 runtime=5 00:20:55.755 ioengine=libaio 00:20:55.755 direct=1 00:20:55.755 bs=131072 00:20:55.755 iodepth=64 00:20:55.755 norandommap=1 00:20:55.755 numjobs=1 00:20:55.755 00:20:55.755 [job0] 00:20:55.755 filename=/dev/sda 00:20:55.755 [job1] 00:20:55.755 filename=/dev/sdb 00:20:55.755 [job2] 00:20:55.755 filename=/dev/sdc 00:20:55.755 [job3] 00:20:55.755 filename=/dev/sdd 00:20:55.755 [job4] 00:20:55.755 filename=/dev/sde 00:20:55.755 [job5] 00:20:55.755 filename=/dev/sdf 00:20:55.755 [job6] 00:20:55.755 filename=/dev/sdg 00:20:55.755 [job7] 00:20:55.755 filename=/dev/sdh 00:20:55.755 [job8] 00:20:55.755 filename=/dev/sdi 00:20:55.755 [job9] 00:20:55.755 filename=/dev/sdj 00:20:55.755 [job10] 00:20:55.755 filename=/dev/sdk 00:20:55.755 [job11] 00:20:55.755 filename=/dev/sdl 00:20:55.755 [job12] 00:20:55.755 filename=/dev/sdm 00:20:55.755 [job13] 00:20:55.755 filename=/dev/sdn 00:20:55.755 [job14] 00:20:55.755 filename=/dev/sdo 00:20:55.755 [job15] 00:20:55.755 filename=/dev/sdp 00:20:55.755 [job16] 00:20:55.755 filename=/dev/sdq 00:20:55.755 [job17] 00:20:55.755 filename=/dev/sdr 00:20:55.755 [job18] 00:20:55.755 filename=/dev/sds 00:20:55.755 [job19] 00:20:55.755 filename=/dev/sdt 00:20:55.755 [job20] 00:20:55.755 filename=/dev/sdu 00:20:55.755 [job21] 00:20:55.755 filename=/dev/sdv 00:20:55.755 [job22] 00:20:55.755 filename=/dev/sdw 00:20:55.755 [job23] 00:20:55.755 filename=/dev/sdx 00:20:55.755 [job24] 00:20:55.755 filename=/dev/sdy 00:20:55.755 [job25] 00:20:55.755 filename=/dev/sdz 00:20:55.755 [job26] 00:20:55.755 filename=/dev/sdaa 00:20:55.755 [job27] 00:20:55.755 filename=/dev/sdab 00:20:55.755 [job28] 00:20:55.755 filename=/dev/sdac 00:20:55.755 [job29] 00:20:55.755 filename=/dev/sdad 00:20:56.012 queue_depth set to 113 (sda) 00:20:56.270 queue_depth set to 113 (sdb) 00:20:56.270 queue_depth set to 113 (sdc) 00:20:56.270 queue_depth set to 113 (sdd) 00:20:56.270 queue_depth set to 113 (sde) 00:20:56.270 queue_depth 
set to 113 (sdf) 00:20:56.270 queue_depth set to 113 (sdg) 00:20:56.270 queue_depth set to 113 (sdh) 00:20:56.270 queue_depth set to 113 (sdi) 00:20:56.270 queue_depth set to 113 (sdj) 00:20:56.270 queue_depth set to 113 (sdk) 00:20:56.270 queue_depth set to 113 (sdl) 00:20:56.528 queue_depth set to 113 (sdm) 00:20:56.528 queue_depth set to 113 (sdn) 00:20:56.528 queue_depth set to 113 (sdo) 00:20:56.528 queue_depth set to 113 (sdp) 00:20:56.528 queue_depth set to 113 (sdq) 00:20:56.528 queue_depth set to 113 (sdr) 00:20:56.528 queue_depth set to 113 (sds) 00:20:56.528 queue_depth set to 113 (sdt) 00:20:56.528 queue_depth set to 113 (sdu) 00:20:56.528 queue_depth set to 113 (sdv) 00:20:56.528 queue_depth set to 113 (sdw) 00:20:56.786 queue_depth set to 113 (sdx) 00:20:56.786 queue_depth set to 113 (sdy) 00:20:56.786 queue_depth set to 113 (sdz) 00:20:56.786 queue_depth set to 113 (sdaa) 00:20:56.786 queue_depth set to 113 (sdab) 00:20:56.786 queue_depth set to 113 (sdac) 00:20:56.786 queue_depth set to 113 (sdad) 00:20:57.045 job0: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:20:57.045 job1: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:20:57.045 job2: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:20:57.045 job3: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:20:57.045 job4: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:20:57.045 job5: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:20:57.045 job6: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:20:57.045 job7: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 
128KiB-128KiB, ioengine=libaio, iodepth=64 00:20:57.045 job8: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:20:57.045 job9: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:20:57.045 job10: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:20:57.045 job11: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:20:57.045 job12: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:20:57.045 job13: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:20:57.045 job14: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:20:57.045 job15: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:20:57.045 job16: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:20:57.045 job17: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:20:57.045 job18: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:20:57.045 job19: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:20:57.045 job20: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:20:57.045 job21: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:20:57.045 job22: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:20:57.045 job23: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 
128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:20:57.045 job24: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:20:57.045 job25: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:20:57.045 job26: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:20:57.045 job27: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:20:57.045 job28: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:20:57.045 job29: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:20:57.045 fio-3.35 00:20:57.045 Starting 30 threads 00:20:57.045 [2024-07-23 05:11:57.053294] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.046 [2024-07-23 05:11:57.055477] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.046 [2024-07-23 05:11:57.057402] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.046 [2024-07-23 05:11:57.059256] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.046 [2024-07-23 05:11:57.061147] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.046 [2024-07-23 05:11:57.062970] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.046 [2024-07-23 05:11:57.064764] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.046 [2024-07-23 05:11:57.066651] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.046 [2024-07-23 05:11:57.068355] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.046 [2024-07-23 
05:11:57.070109] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.046 [2024-07-23 05:11:57.071845] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.046 [2024-07-23 05:11:57.073535] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.046 [2024-07-23 05:11:57.075189] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.046 [2024-07-23 05:11:57.076893] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.046 [2024-07-23 05:11:57.078660] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.046 [2024-07-23 05:11:57.080413] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.046 [2024-07-23 05:11:57.082175] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.046 [2024-07-23 05:11:57.083929] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.046 [2024-07-23 05:11:57.085764] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.046 [2024-07-23 05:11:57.087430] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.046 [2024-07-23 05:11:57.089171] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.046 [2024-07-23 05:11:57.090950] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.046 [2024-07-23 05:11:57.092743] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.046 [2024-07-23 05:11:57.094492] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.046 [2024-07-23 05:11:57.096391] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.046 [2024-07-23 05:11:57.098131] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.046 [2024-07-23 05:11:57.099870] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.046 [2024-07-23 05:11:57.101692] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.046 [2024-07-23 05:11:57.103386] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:57.046 [2024-07-23 05:11:57.105105] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.631 [2024-07-23 05:12:03.078933] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.631 [2024-07-23 05:12:03.092850] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.631 [2024-07-23 05:12:03.095443] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.631 [2024-07-23 05:12:03.098159] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.631 [2024-07-23 05:12:03.100290] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.631 [2024-07-23 05:12:03.102401] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.631 [2024-07-23 05:12:03.104675] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.631 [2024-07-23 05:12:03.106713] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.631 [2024-07-23 05:12:03.108709] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.631 [2024-07-23 05:12:03.110893] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.631 [2024-07-23 05:12:03.112939] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.631 [2024-07-23 05:12:03.115265] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:21:03.631 [2024-07-23 05:12:03.117359] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.631 [2024-07-23 05:12:03.119712] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.631 [2024-07-23 05:12:03.121800] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.631 [2024-07-23 05:12:03.123900] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.631 [2024-07-23 05:12:03.126027] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.631 [2024-07-23 05:12:03.128087] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.631 [2024-07-23 05:12:03.133806] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.631 [2024-07-23 05:12:03.136098] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.631 [2024-07-23 05:12:03.138279] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.631 [2024-07-23 05:12:03.140392] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.631 00:21:03.631 job0: (groupid=0, jobs=1): err= 0: pid=91658: Tue Jul 23 05:12:03 2024 00:21:03.631 read: IOPS=75, BW=9628KiB/s (9859kB/s)(50.9MiB/5411msec) 00:21:03.631 slat (nsec): min=8600, max=90580, avg=28494.66, stdev=15374.93 00:21:03.631 clat (msec): min=34, max=443, avg=64.54, stdev=45.57 00:21:03.631 lat (msec): min=34, max=443, avg=64.57, stdev=45.57 00:21:03.631 clat percentiles (msec): 00:21:03.631 | 1.00th=[ 37], 5.00th=[ 40], 10.00th=[ 46], 20.00th=[ 48], 00:21:03.631 | 30.00th=[ 50], 40.00th=[ 50], 50.00th=[ 51], 60.00th=[ 51], 00:21:03.631 | 70.00th=[ 52], 80.00th=[ 55], 90.00th=[ 107], 95.00th=[ 169], 00:21:03.631 | 99.00th=[ 224], 99.50th=[ 230], 99.90th=[ 443], 99.95th=[ 443], 00:21:03.631 | 99.99th=[ 443] 
00:21:03.631 bw ( KiB/s): min= 7168, max=14592, per=3.43%, avg=10368.00, stdev=2594.61, samples=10 00:21:03.631 iops : min= 56, max= 114, avg=81.00, stdev=20.27, samples=10 00:21:03.631 write: IOPS=79, BW=9.98MiB/s (10.5MB/s)(54.0MiB/5411msec); 0 zone resets 00:21:03.631 slat (usec): min=12, max=118, avg=34.39, stdev=16.79 00:21:03.631 clat (msec): min=237, max=1121, avg=739.67, stdev=110.06 00:21:03.631 lat (msec): min=237, max=1121, avg=739.70, stdev=110.06 00:21:03.631 clat percentiles (msec): 00:21:03.631 | 1.00th=[ 380], 5.00th=[ 481], 10.00th=[ 651], 20.00th=[ 726], 00:21:03.631 | 30.00th=[ 735], 40.00th=[ 743], 50.00th=[ 751], 60.00th=[ 760], 00:21:03.631 | 70.00th=[ 768], 80.00th=[ 776], 90.00th=[ 785], 95.00th=[ 869], 00:21:03.631 | 99.00th=[ 1070], 99.50th=[ 1083], 99.90th=[ 1116], 99.95th=[ 1116], 00:21:03.631 | 99.99th=[ 1116] 00:21:03.631 bw ( KiB/s): min= 2816, max=10496, per=3.14%, avg=9497.60, stdev=2360.05, samples=10 00:21:03.631 iops : min= 22, max= 82, avg=74.20, stdev=18.44, samples=10 00:21:03.631 lat (msec) : 50=22.17%, 100=20.98%, 250=5.24%, 500=3.10%, 750=20.62% 00:21:03.631 lat (msec) : 1000=26.34%, 2000=1.55% 00:21:03.631 cpu : usr=0.15%, sys=0.55%, ctx=506, majf=0, minf=1 00:21:03.631 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.8%, >=64=92.5% 00:21:03.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.631 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:03.631 issued rwts: total=407,432,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.631 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:03.631 job1: (groupid=0, jobs=1): err= 0: pid=91659: Tue Jul 23 05:12:03 2024 00:21:03.631 read: IOPS=69, BW=8872KiB/s (9085kB/s)(46.9MiB/5410msec) 00:21:03.631 slat (usec): min=8, max=260, avg=31.22, stdev=24.66 00:21:03.631 clat (msec): min=29, max=444, avg=68.40, stdev=52.23 00:21:03.631 lat (msec): min=29, max=445, avg=68.43, stdev=52.23 00:21:03.631 clat 
percentiles (msec): 00:21:03.631 | 1.00th=[ 37], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 49], 00:21:03.631 | 30.00th=[ 50], 40.00th=[ 51], 50.00th=[ 51], 60.00th=[ 51], 00:21:03.631 | 70.00th=[ 52], 80.00th=[ 54], 90.00th=[ 142], 95.00th=[ 203], 00:21:03.631 | 99.00th=[ 239], 99.50th=[ 447], 99.90th=[ 447], 99.95th=[ 447], 00:21:03.631 | 99.99th=[ 447] 00:21:03.631 bw ( KiB/s): min= 6656, max=14108, per=3.16%, avg=9549.10, stdev=2588.43, samples=10 00:21:03.631 iops : min= 52, max= 110, avg=74.50, stdev=20.10, samples=10 00:21:03.631 write: IOPS=79, BW=9.96MiB/s (10.4MB/s)(53.9MiB/5410msec); 0 zone resets 00:21:03.631 slat (usec): min=13, max=1211, avg=44.08, stdev=84.05 00:21:03.631 clat (msec): min=240, max=1135, avg=742.30, stdev=109.47 00:21:03.631 lat (msec): min=241, max=1135, avg=742.35, stdev=109.45 00:21:03.631 clat percentiles (msec): 00:21:03.631 | 1.00th=[ 380], 5.00th=[ 523], 10.00th=[ 659], 20.00th=[ 718], 00:21:03.631 | 30.00th=[ 735], 40.00th=[ 743], 50.00th=[ 751], 60.00th=[ 760], 00:21:03.631 | 70.00th=[ 768], 80.00th=[ 776], 90.00th=[ 802], 95.00th=[ 885], 00:21:03.631 | 99.00th=[ 1099], 99.50th=[ 1116], 99.90th=[ 1133], 99.95th=[ 1133], 00:21:03.631 | 99.99th=[ 1133] 00:21:03.631 bw ( KiB/s): min= 2821, max=10496, per=3.13%, avg=9470.40, stdev=2349.90, samples=10 00:21:03.631 iops : min= 22, max= 82, avg=73.90, stdev=18.33, samples=10 00:21:03.631 lat (msec) : 50=20.47%, 100=20.22%, 250=5.58%, 500=2.36%, 750=25.06% 00:21:03.631 lat (msec) : 1000=24.69%, 2000=1.61% 00:21:03.631 cpu : usr=0.24%, sys=0.46%, ctx=522, majf=0, minf=1 00:21:03.631 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.2% 00:21:03.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.631 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:03.631 issued rwts: total=375,431,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.631 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:03.631 job2: 
(groupid=0, jobs=1): err= 0: pid=91697: Tue Jul 23 05:12:03 2024 00:21:03.631 read: IOPS=82, BW=10.3MiB/s (10.8MB/s)(55.8MiB/5420msec) 00:21:03.631 slat (usec): min=10, max=129, avg=31.68, stdev=16.28 00:21:03.631 clat (msec): min=36, max=443, avg=65.70, stdev=55.19 00:21:03.631 lat (msec): min=36, max=443, avg=65.73, stdev=55.19 00:21:03.631 clat percentiles (msec): 00:21:03.631 | 1.00th=[ 37], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 49], 00:21:03.631 | 30.00th=[ 50], 40.00th=[ 50], 50.00th=[ 51], 60.00th=[ 52], 00:21:03.631 | 70.00th=[ 52], 80.00th=[ 54], 90.00th=[ 86], 95.00th=[ 186], 00:21:03.631 | 99.00th=[ 305], 99.50th=[ 430], 99.90th=[ 443], 99.95th=[ 443], 00:21:03.631 | 99.99th=[ 443] 00:21:03.631 bw ( KiB/s): min= 8704, max=14336, per=3.73%, avg=11289.60, stdev=2010.88, samples=10 00:21:03.631 iops : min= 68, max= 112, avg=88.10, stdev=15.65, samples=10 00:21:03.631 write: IOPS=78, BW=9.87MiB/s (10.3MB/s)(53.5MiB/5420msec); 0 zone resets 00:21:03.631 slat (usec): min=11, max=110, avg=34.50, stdev=15.74 00:21:03.631 clat (msec): min=261, max=1147, avg=740.82, stdev=106.52 00:21:03.631 lat (msec): min=261, max=1147, avg=740.85, stdev=106.52 00:21:03.631 clat percentiles (msec): 00:21:03.631 | 1.00th=[ 397], 5.00th=[ 542], 10.00th=[ 659], 20.00th=[ 709], 00:21:03.631 | 30.00th=[ 726], 40.00th=[ 743], 50.00th=[ 743], 60.00th=[ 760], 00:21:03.631 | 70.00th=[ 768], 80.00th=[ 776], 90.00th=[ 793], 95.00th=[ 911], 00:21:03.631 | 99.00th=[ 1116], 99.50th=[ 1133], 99.90th=[ 1150], 99.95th=[ 1150], 00:21:03.631 | 99.99th=[ 1150] 00:21:03.631 bw ( KiB/s): min= 2565, max=10752, per=3.12%, avg=9444.80, stdev=2436.63, samples=10 00:21:03.631 iops : min= 20, max= 84, avg=73.70, stdev=19.02, samples=10 00:21:03.631 lat (msec) : 50=23.00%, 100=23.46%, 250=3.55%, 500=2.75%, 750=25.17% 00:21:03.631 lat (msec) : 1000=20.48%, 2000=1.60% 00:21:03.632 cpu : usr=0.11%, sys=0.65%, ctx=487, majf=0, minf=1 00:21:03.632 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=3.7%, 
>=64=92.8% 00:21:03.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.632 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:03.632 issued rwts: total=446,428,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.632 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:03.632 job3: (groupid=0, jobs=1): err= 0: pid=91709: Tue Jul 23 05:12:03 2024 00:21:03.632 read: IOPS=77, BW=9912KiB/s (10.1MB/s)(52.4MiB/5411msec) 00:21:03.632 slat (usec): min=6, max=118, avg=30.94, stdev=15.24 00:21:03.632 clat (msec): min=29, max=442, avg=65.26, stdev=47.22 00:21:03.632 lat (msec): min=29, max=442, avg=65.29, stdev=47.22 00:21:03.632 clat percentiles (msec): 00:21:03.632 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 49], 00:21:03.632 | 30.00th=[ 50], 40.00th=[ 51], 50.00th=[ 51], 60.00th=[ 52], 00:21:03.632 | 70.00th=[ 52], 80.00th=[ 54], 90.00th=[ 102], 95.00th=[ 192], 00:21:03.632 | 99.00th=[ 232], 99.50th=[ 234], 99.90th=[ 443], 99.95th=[ 443], 00:21:03.632 | 99.99th=[ 443] 00:21:03.632 bw ( KiB/s): min= 6656, max=14364, per=3.53%, avg=10676.30, stdev=2216.26, samples=10 00:21:03.632 iops : min= 52, max= 112, avg=83.30, stdev=17.38, samples=10 00:21:03.632 write: IOPS=79, BW=9.96MiB/s (10.4MB/s)(53.9MiB/5411msec); 0 zone resets 00:21:03.632 slat (usec): min=11, max=269, avg=35.82, stdev=21.78 00:21:03.632 clat (msec): min=239, max=1119, avg=738.87, stdev=108.53 00:21:03.632 lat (msec): min=239, max=1119, avg=738.90, stdev=108.53 00:21:03.632 clat percentiles (msec): 00:21:03.632 | 1.00th=[ 376], 5.00th=[ 518], 10.00th=[ 651], 20.00th=[ 718], 00:21:03.632 | 30.00th=[ 735], 40.00th=[ 743], 50.00th=[ 751], 60.00th=[ 760], 00:21:03.632 | 70.00th=[ 760], 80.00th=[ 776], 90.00th=[ 785], 95.00th=[ 885], 00:21:03.632 | 99.00th=[ 1099], 99.50th=[ 1116], 99.90th=[ 1116], 99.95th=[ 1116], 00:21:03.632 | 99.99th=[ 1116] 00:21:03.632 bw ( KiB/s): min= 2821, max=10496, per=3.13%, avg=9470.40, stdev=2347.05, samples=10 
00:21:03.632 iops : min= 22, max= 82, avg=73.90, stdev=18.32, samples=10 00:21:03.632 lat (msec) : 50=20.35%, 100=23.88%, 250=4.94%, 500=2.47%, 750=24.24% 00:21:03.632 lat (msec) : 1000=22.71%, 2000=1.41% 00:21:03.632 cpu : usr=0.11%, sys=0.55%, ctx=502, majf=0, minf=1 00:21:03.632 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=3.8%, >=64=92.6% 00:21:03.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.632 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:03.632 issued rwts: total=419,431,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.632 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:03.632 job4: (groupid=0, jobs=1): err= 0: pid=91710: Tue Jul 23 05:12:03 2024 00:21:03.632 read: IOPS=76, BW=9806KiB/s (10.0MB/s)(52.0MiB/5430msec) 00:21:03.632 slat (nsec): min=8453, max=76816, avg=29605.44, stdev=13845.29 00:21:03.632 clat (msec): min=23, max=476, avg=68.03, stdev=52.19 00:21:03.632 lat (msec): min=23, max=476, avg=68.05, stdev=52.19 00:21:03.632 clat percentiles (msec): 00:21:03.632 | 1.00th=[ 37], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 49], 00:21:03.632 | 30.00th=[ 50], 40.00th=[ 51], 50.00th=[ 51], 60.00th=[ 52], 00:21:03.632 | 70.00th=[ 52], 80.00th=[ 56], 90.00th=[ 129], 95.00th=[ 178], 00:21:03.632 | 99.00th=[ 292], 99.50th=[ 439], 99.90th=[ 477], 99.95th=[ 477], 00:21:03.632 | 99.99th=[ 477] 00:21:03.632 bw ( KiB/s): min= 7936, max=18176, per=3.50%, avg=10572.80, stdev=2973.35, samples=10 00:21:03.632 iops : min= 62, max= 142, avg=82.60, stdev=23.23, samples=10 00:21:03.632 write: IOPS=78, BW=9.85MiB/s (10.3MB/s)(53.5MiB/5430msec); 0 zone resets 00:21:03.632 slat (usec): min=13, max=9900, avg=57.47, stdev=477.10 00:21:03.632 clat (msec): min=247, max=1177, avg=743.16, stdev=108.74 00:21:03.632 lat (msec): min=257, max=1177, avg=743.22, stdev=108.64 00:21:03.632 clat percentiles (msec): 00:21:03.632 | 1.00th=[ 388], 5.00th=[ 527], 10.00th=[ 659], 20.00th=[ 718], 00:21:03.632 
| 30.00th=[ 735], 40.00th=[ 743], 50.00th=[ 751], 60.00th=[ 760], 00:21:03.632 | 70.00th=[ 768], 80.00th=[ 776], 90.00th=[ 785], 95.00th=[ 902], 00:21:03.632 | 99.00th=[ 1116], 99.50th=[ 1167], 99.90th=[ 1183], 99.95th=[ 1183], 00:21:03.632 | 99.99th=[ 1183] 00:21:03.632 bw ( KiB/s): min= 2560, max=10496, per=3.11%, avg=9420.80, stdev=2425.03, samples=10 00:21:03.632 iops : min= 20, max= 82, avg=73.60, stdev=18.95, samples=10 00:21:03.632 lat (msec) : 50=21.56%, 100=20.85%, 250=6.28%, 500=2.84%, 750=21.33% 00:21:03.632 lat (msec) : 1000=25.59%, 2000=1.54% 00:21:03.632 cpu : usr=0.24%, sys=0.48%, ctx=498, majf=0, minf=1 00:21:03.632 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=3.8%, >=64=92.5% 00:21:03.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.632 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:03.632 issued rwts: total=416,428,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.632 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:03.632 job5: (groupid=0, jobs=1): err= 0: pid=91712: Tue Jul 23 05:12:03 2024 00:21:03.632 read: IOPS=81, BW=10.2MiB/s (10.7MB/s)(55.6MiB/5432msec) 00:21:03.632 slat (nsec): min=8102, max=97793, avg=31136.56, stdev=15560.89 00:21:03.632 clat (msec): min=23, max=458, avg=61.83, stdev=42.18 00:21:03.632 lat (msec): min=23, max=458, avg=61.86, stdev=42.18 00:21:03.632 clat percentiles (msec): 00:21:03.632 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 47], 20.00th=[ 49], 00:21:03.632 | 30.00th=[ 50], 40.00th=[ 50], 50.00th=[ 51], 60.00th=[ 51], 00:21:03.632 | 70.00th=[ 52], 80.00th=[ 53], 90.00th=[ 80], 95.00th=[ 153], 00:21:03.632 | 99.00th=[ 255], 99.50th=[ 268], 99.90th=[ 460], 99.95th=[ 460], 00:21:03.632 | 99.99th=[ 460] 00:21:03.632 bw ( KiB/s): min= 7936, max=16128, per=3.76%, avg=11366.70, stdev=2484.70, samples=10 00:21:03.632 iops : min= 62, max= 126, avg=88.70, stdev=19.40, samples=10 00:21:03.632 write: IOPS=79, BW=9.94MiB/s 
(10.4MB/s)(54.0MiB/5432msec); 0 zone resets 00:21:03.632 slat (usec): min=8, max=135, avg=33.82, stdev=16.50 00:21:03.632 clat (msec): min=253, max=1149, avg=739.79, stdev=108.78 00:21:03.632 lat (msec): min=253, max=1149, avg=739.83, stdev=108.78 00:21:03.632 clat percentiles (msec): 00:21:03.632 | 1.00th=[ 397], 5.00th=[ 523], 10.00th=[ 659], 20.00th=[ 726], 00:21:03.632 | 30.00th=[ 735], 40.00th=[ 743], 50.00th=[ 743], 60.00th=[ 751], 00:21:03.632 | 70.00th=[ 760], 80.00th=[ 768], 90.00th=[ 802], 95.00th=[ 894], 00:21:03.632 | 99.00th=[ 1083], 99.50th=[ 1150], 99.90th=[ 1150], 99.95th=[ 1150], 00:21:03.632 | 99.99th=[ 1150] 00:21:03.632 bw ( KiB/s): min= 2565, max=10496, per=3.12%, avg=9444.90, stdev=2430.92, samples=10 00:21:03.632 iops : min= 20, max= 82, avg=73.70, stdev=18.99, samples=10 00:21:03.632 lat (msec) : 50=22.46%, 100=24.06%, 250=3.65%, 500=2.74%, 750=26.80% 00:21:03.632 lat (msec) : 1000=18.93%, 2000=1.37% 00:21:03.632 cpu : usr=0.29%, sys=0.46%, ctx=495, majf=0, minf=1 00:21:03.632 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=92.8% 00:21:03.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.632 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:03.632 issued rwts: total=445,432,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.632 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:03.632 job6: (groupid=0, jobs=1): err= 0: pid=91719: Tue Jul 23 05:12:03 2024 00:21:03.632 read: IOPS=78, BW=9.81MiB/s (10.3MB/s)(53.1MiB/5414msec) 00:21:03.632 slat (nsec): min=6797, max=99330, avg=27408.41, stdev=13805.69 00:21:03.632 clat (msec): min=35, max=429, avg=63.71, stdev=44.82 00:21:03.632 lat (msec): min=35, max=429, avg=63.74, stdev=44.82 00:21:03.632 clat percentiles (msec): 00:21:03.632 | 1.00th=[ 37], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 49], 00:21:03.632 | 30.00th=[ 50], 40.00th=[ 50], 50.00th=[ 51], 60.00th=[ 52], 00:21:03.632 | 70.00th=[ 52], 80.00th=[ 55], 
90.00th=[ 95], 95.00th=[ 159], 00:21:03.632 | 99.00th=[ 230], 99.50th=[ 239], 99.90th=[ 430], 99.95th=[ 430], 00:21:03.632 | 99.99th=[ 430] 00:21:03.632 bw ( KiB/s): min= 6131, max=14080, per=3.57%, avg=10804.60, stdev=3004.58, samples=10 00:21:03.632 iops : min= 47, max= 110, avg=84.30, stdev=23.61, samples=10 00:21:03.632 write: IOPS=79, BW=9.95MiB/s (10.4MB/s)(53.9MiB/5414msec); 0 zone resets 00:21:03.632 slat (usec): min=11, max=3556, avg=40.27, stdev=170.62 00:21:03.632 clat (msec): min=242, max=1157, avg=739.92, stdev=108.27 00:21:03.632 lat (msec): min=242, max=1157, avg=739.97, stdev=108.27 00:21:03.632 clat percentiles (msec): 00:21:03.632 | 1.00th=[ 380], 5.00th=[ 518], 10.00th=[ 684], 20.00th=[ 718], 00:21:03.632 | 30.00th=[ 735], 40.00th=[ 743], 50.00th=[ 751], 60.00th=[ 760], 00:21:03.632 | 70.00th=[ 768], 80.00th=[ 776], 90.00th=[ 785], 95.00th=[ 877], 00:21:03.632 | 99.00th=[ 1099], 99.50th=[ 1116], 99.90th=[ 1150], 99.95th=[ 1150], 00:21:03.632 | 99.99th=[ 1150] 00:21:03.632 bw ( KiB/s): min= 2821, max=10496, per=3.13%, avg=9470.40, stdev=2349.90, samples=10 00:21:03.632 iops : min= 22, max= 82, avg=73.90, stdev=18.33, samples=10 00:21:03.632 lat (msec) : 50=22.20%, 100=22.55%, 250=4.79%, 500=2.34%, 750=22.31% 00:21:03.632 lat (msec) : 1000=24.42%, 2000=1.40% 00:21:03.632 cpu : usr=0.09%, sys=0.55%, ctx=495, majf=0, minf=1 00:21:03.632 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=3.7%, >=64=92.6% 00:21:03.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.632 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:03.632 issued rwts: total=425,431,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.632 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:03.632 job7: (groupid=0, jobs=1): err= 0: pid=91731: Tue Jul 23 05:12:03 2024 00:21:03.632 read: IOPS=72, BW=9254KiB/s (9476kB/s)(49.0MiB/5422msec) 00:21:03.632 slat (nsec): min=10393, max=78704, avg=29641.35, stdev=13136.26 
00:21:03.632 clat (msec): min=35, max=465, avg=67.68, stdev=51.53 00:21:03.632 lat (msec): min=35, max=465, avg=67.71, stdev=51.53 00:21:03.632 clat percentiles (msec): 00:21:03.632 | 1.00th=[ 39], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 50], 00:21:03.632 | 30.00th=[ 50], 40.00th=[ 51], 50.00th=[ 51], 60.00th=[ 52], 00:21:03.632 | 70.00th=[ 52], 80.00th=[ 54], 90.00th=[ 132], 95.00th=[ 180], 00:21:03.633 | 99.00th=[ 226], 99.50th=[ 451], 99.90th=[ 468], 99.95th=[ 468], 00:21:03.633 | 99.99th=[ 468] 00:21:03.633 bw ( KiB/s): min= 6400, max=14848, per=3.29%, avg=9956.30, stdev=2396.52, samples=10 00:21:03.633 iops : min= 50, max= 116, avg=77.70, stdev=18.71, samples=10 00:21:03.633 write: IOPS=79, BW=9.89MiB/s (10.4MB/s)(53.6MiB/5422msec); 0 zone resets 00:21:03.633 slat (usec): min=12, max=1527, avg=39.01, stdev=73.70 00:21:03.633 clat (msec): min=247, max=1143, avg=745.54, stdev=105.12 00:21:03.633 lat (msec): min=249, max=1143, avg=745.58, stdev=105.11 00:21:03.633 clat percentiles (msec): 00:21:03.633 | 1.00th=[ 388], 5.00th=[ 550], 10.00th=[ 684], 20.00th=[ 726], 00:21:03.633 | 30.00th=[ 743], 40.00th=[ 743], 50.00th=[ 751], 60.00th=[ 760], 00:21:03.633 | 70.00th=[ 768], 80.00th=[ 776], 90.00th=[ 793], 95.00th=[ 953], 00:21:03.633 | 99.00th=[ 1083], 99.50th=[ 1116], 99.90th=[ 1150], 99.95th=[ 1150], 00:21:03.633 | 99.99th=[ 1150] 00:21:03.633 bw ( KiB/s): min= 2816, max=10496, per=3.12%, avg=9444.30, stdev=2343.54, samples=10 00:21:03.633 iops : min= 22, max= 82, avg=73.70, stdev=18.27, samples=10 00:21:03.633 lat (msec) : 50=19.00%, 100=22.78%, 250=5.72%, 500=2.31%, 750=21.32% 00:21:03.633 lat (msec) : 1000=27.41%, 2000=1.46% 00:21:03.633 cpu : usr=0.26%, sys=0.46%, ctx=479, majf=0, minf=1 00:21:03.633 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.9%, >=64=92.3% 00:21:03.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.633 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:03.633 issued 
rwts: total=392,429,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.633 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:03.633 job8: (groupid=0, jobs=1): err= 0: pid=91749: Tue Jul 23 05:12:03 2024 00:21:03.633 read: IOPS=70, BW=9015KiB/s (9231kB/s)(47.9MiB/5438msec) 00:21:03.633 slat (usec): min=8, max=196, avg=26.30, stdev=19.78 00:21:03.633 clat (msec): min=4, max=471, avg=63.48, stdev=56.60 00:21:03.633 lat (msec): min=4, max=471, avg=63.51, stdev=56.60 00:21:03.633 clat percentiles (msec): 00:21:03.633 | 1.00th=[ 14], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 49], 00:21:03.633 | 30.00th=[ 50], 40.00th=[ 50], 50.00th=[ 51], 60.00th=[ 51], 00:21:03.633 | 70.00th=[ 52], 80.00th=[ 53], 90.00th=[ 81], 95.00th=[ 188], 00:21:03.633 | 99.00th=[ 284], 99.50th=[ 460], 99.90th=[ 472], 99.95th=[ 472], 00:21:03.633 | 99.99th=[ 472] 00:21:03.633 bw ( KiB/s): min= 5888, max=15872, per=3.22%, avg=9728.00, stdev=2719.98, samples=10 00:21:03.633 iops : min= 46, max= 124, avg=76.00, stdev=21.25, samples=10 00:21:03.633 write: IOPS=79, BW=9.88MiB/s (10.4MB/s)(53.8MiB/5438msec); 0 zone resets 00:21:03.633 slat (usec): min=12, max=314, avg=32.53, stdev=21.87 00:21:03.633 clat (msec): min=126, max=1184, avg=751.52, stdev=115.62 00:21:03.633 lat (msec): min=126, max=1184, avg=751.55, stdev=115.62 00:21:03.633 clat percentiles (msec): 00:21:03.633 | 1.00th=[ 313], 5.00th=[ 535], 10.00th=[ 676], 20.00th=[ 726], 00:21:03.633 | 30.00th=[ 735], 40.00th=[ 743], 50.00th=[ 751], 60.00th=[ 760], 00:21:03.633 | 70.00th=[ 776], 80.00th=[ 793], 90.00th=[ 818], 95.00th=[ 911], 00:21:03.633 | 99.00th=[ 1116], 99.50th=[ 1167], 99.90th=[ 1183], 99.95th=[ 1183], 00:21:03.633 | 99.99th=[ 1183] 00:21:03.633 bw ( KiB/s): min= 2816, max=10496, per=3.13%, avg=9472.00, stdev=2355.57, samples=10 00:21:03.633 iops : min= 22, max= 82, avg=74.00, stdev=18.40, samples=10 00:21:03.633 lat (msec) : 10=0.25%, 20=1.11%, 50=20.91%, 100=21.16%, 250=2.21% 00:21:03.633 lat (msec) : 500=3.81%, 750=22.26%, 
1000=26.45%, 2000=1.85% 00:21:03.633 cpu : usr=0.17%, sys=0.39%, ctx=518, majf=0, minf=1 00:21:03.633 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=3.9%, >=64=92.3% 00:21:03.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.633 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:03.633 issued rwts: total=383,430,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.633 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:03.633 job9: (groupid=0, jobs=1): err= 0: pid=91803: Tue Jul 23 05:12:03 2024 00:21:03.633 read: IOPS=86, BW=10.8MiB/s (11.3MB/s)(58.6MiB/5450msec) 00:21:03.633 slat (usec): min=6, max=394, avg=30.65, stdev=27.72 00:21:03.633 clat (usec): min=1055, max=452649, avg=63652.41, stdev=53064.43 00:21:03.633 lat (usec): min=1069, max=452699, avg=63683.06, stdev=53063.16 00:21:03.633 clat percentiles (msec): 00:21:03.633 | 1.00th=[ 6], 5.00th=[ 12], 10.00th=[ 37], 20.00th=[ 48], 00:21:03.633 | 30.00th=[ 49], 40.00th=[ 50], 50.00th=[ 51], 60.00th=[ 51], 00:21:03.633 | 70.00th=[ 52], 80.00th=[ 54], 90.00th=[ 127], 95.00th=[ 180], 00:21:03.633 | 99.00th=[ 266], 99.50th=[ 268], 99.90th=[ 451], 99.95th=[ 451], 00:21:03.633 | 99.99th=[ 451] 00:21:03.633 bw ( KiB/s): min= 7168, max=25856, per=3.95%, avg=11955.20, stdev=5434.67, samples=10 00:21:03.633 iops : min= 56, max= 202, avg=93.40, stdev=42.46, samples=10 00:21:03.633 write: IOPS=80, BW=10.0MiB/s (10.5MB/s)(54.8MiB/5450msec); 0 zone resets 00:21:03.633 slat (usec): min=8, max=153, avg=33.93, stdev=15.94 00:21:03.633 clat (msec): min=4, max=1203, avg=726.92, stdev=145.94 00:21:03.633 lat (msec): min=4, max=1203, avg=726.96, stdev=145.95 00:21:03.633 clat percentiles (msec): 00:21:03.633 | 1.00th=[ 5], 5.00th=[ 481], 10.00th=[ 625], 20.00th=[ 701], 00:21:03.633 | 30.00th=[ 726], 40.00th=[ 735], 50.00th=[ 751], 60.00th=[ 760], 00:21:03.633 | 70.00th=[ 760], 80.00th=[ 776], 90.00th=[ 793], 95.00th=[ 919], 00:21:03.633 | 99.00th=[ 1116], 
99.50th=[ 1150], 99.90th=[ 1200], 99.95th=[ 1200], 00:21:03.633 | 99.99th=[ 1200] 00:21:03.633 bw ( KiB/s): min= 4352, max=10496, per=3.18%, avg=9625.60, stdev=1866.44, samples=10 00:21:03.633 iops : min= 34, max= 82, avg=75.20, stdev=14.58, samples=10 00:21:03.633 lat (msec) : 2=0.22%, 10=2.98%, 20=1.21%, 50=20.84%, 100=21.28% 00:21:03.633 lat (msec) : 250=4.41%, 500=3.31%, 750=23.48%, 1000=20.95%, 2000=1.32% 00:21:03.633 cpu : usr=0.17%, sys=0.50%, ctx=563, majf=0, minf=1 00:21:03.633 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.5%, >=64=93.1% 00:21:03.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.633 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:03.633 issued rwts: total=469,438,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.633 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:03.633 job10: (groupid=0, jobs=1): err= 0: pid=91842: Tue Jul 23 05:12:03 2024 00:21:03.633 read: IOPS=84, BW=10.5MiB/s (11.0MB/s)(56.9MiB/5402msec) 00:21:03.633 slat (usec): min=8, max=865, avg=45.39, stdev=92.62 00:21:03.633 clat (msec): min=34, max=446, avg=68.47, stdev=57.58 00:21:03.633 lat (msec): min=34, max=446, avg=68.52, stdev=57.58 00:21:03.633 clat percentiles (msec): 00:21:03.633 | 1.00th=[ 37], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 48], 00:21:03.633 | 30.00th=[ 50], 40.00th=[ 50], 50.00th=[ 51], 60.00th=[ 51], 00:21:03.633 | 70.00th=[ 52], 80.00th=[ 54], 90.00th=[ 138], 95.00th=[ 205], 00:21:03.633 | 99.00th=[ 409], 99.50th=[ 409], 99.90th=[ 447], 99.95th=[ 447], 00:21:03.633 | 99.99th=[ 447] 00:21:03.633 bw ( KiB/s): min= 7424, max=16384, per=3.81%, avg=11520.00, stdev=2985.45, samples=10 00:21:03.633 iops : min= 58, max= 128, avg=90.00, stdev=23.32, samples=10 00:21:03.633 write: IOPS=79, BW=9.90MiB/s (10.4MB/s)(53.5MiB/5402msec); 0 zone resets 00:21:03.633 slat (usec): min=12, max=973, avg=49.49, stdev=92.87 00:21:03.633 clat (msec): min=251, max=1149, avg=733.75, stdev=108.05 
00:21:03.633 lat (msec): min=251, max=1149, avg=733.80, stdev=108.06 00:21:03.633 clat percentiles (msec): 00:21:03.633 | 1.00th=[ 384], 5.00th=[ 535], 10.00th=[ 651], 20.00th=[ 709], 00:21:03.633 | 30.00th=[ 726], 40.00th=[ 735], 50.00th=[ 743], 60.00th=[ 751], 00:21:03.633 | 70.00th=[ 760], 80.00th=[ 768], 90.00th=[ 785], 95.00th=[ 919], 00:21:03.633 | 99.00th=[ 1099], 99.50th=[ 1133], 99.90th=[ 1150], 99.95th=[ 1150], 00:21:03.633 | 99.99th=[ 1150] 00:21:03.633 bw ( KiB/s): min= 2816, max=10752, per=3.13%, avg=9472.00, stdev=2355.57, samples=10 00:21:03.633 iops : min= 22, max= 84, avg=74.00, stdev=18.40, samples=10 00:21:03.633 lat (msec) : 50=24.46%, 100=21.29%, 250=5.21%, 500=2.38%, 750=28.31% 00:21:03.633 lat (msec) : 1000=16.87%, 2000=1.47% 00:21:03.633 cpu : usr=0.11%, sys=0.56%, ctx=645, majf=0, minf=1 00:21:03.633 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=92.9% 00:21:03.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.633 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:03.633 issued rwts: total=455,428,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.633 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:03.633 job11: (groupid=0, jobs=1): err= 0: pid=91867: Tue Jul 23 05:12:03 2024 00:21:03.633 read: IOPS=76, BW=9774KiB/s (10.0MB/s)(51.8MiB/5422msec) 00:21:03.633 slat (nsec): min=8542, max=81453, avg=28829.13, stdev=13264.15 00:21:03.633 clat (msec): min=35, max=452, avg=71.29, stdev=56.78 00:21:03.633 lat (msec): min=36, max=452, avg=71.32, stdev=56.78 00:21:03.633 clat percentiles (msec): 00:21:03.633 | 1.00th=[ 39], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 50], 00:21:03.633 | 30.00th=[ 50], 40.00th=[ 51], 50.00th=[ 51], 60.00th=[ 52], 00:21:03.633 | 70.00th=[ 53], 80.00th=[ 56], 90.00th=[ 155], 95.00th=[ 194], 00:21:03.633 | 99.00th=[ 230], 99.50th=[ 430], 99.90th=[ 451], 99.95th=[ 451], 00:21:03.633 | 99.99th=[ 451] 00:21:03.633 bw ( KiB/s): min= 4864, 
max=17920, per=3.47%, avg=10493.80, stdev=3854.04, samples=10 00:21:03.633 iops : min= 38, max= 140, avg=81.90, stdev=30.11, samples=10 00:21:03.633 write: IOPS=79, BW=9.89MiB/s (10.4MB/s)(53.6MiB/5422msec); 0 zone resets 00:21:03.633 slat (usec): min=13, max=6243, avg=47.83, stdev=300.13 00:21:03.633 clat (msec): min=243, max=1166, avg=737.90, stdev=111.36 00:21:03.633 lat (msec): min=250, max=1166, avg=737.95, stdev=111.30 00:21:03.633 clat percentiles (msec): 00:21:03.633 | 1.00th=[ 384], 5.00th=[ 531], 10.00th=[ 642], 20.00th=[ 709], 00:21:03.633 | 30.00th=[ 726], 40.00th=[ 735], 50.00th=[ 751], 60.00th=[ 760], 00:21:03.633 | 70.00th=[ 768], 80.00th=[ 776], 90.00th=[ 802], 95.00th=[ 919], 00:21:03.633 | 99.00th=[ 1099], 99.50th=[ 1133], 99.90th=[ 1167], 99.95th=[ 1167], 00:21:03.633 | 99.99th=[ 1167] 00:21:03.633 bw ( KiB/s): min= 2816, max=10496, per=3.13%, avg=9469.90, stdev=2354.57, samples=10 00:21:03.633 iops : min= 22, max= 82, avg=73.90, stdev=18.36, samples=10 00:21:03.633 lat (msec) : 50=19.10%, 100=23.25%, 250=6.41%, 500=2.25%, 750=24.67% 00:21:03.633 lat (msec) : 1000=22.66%, 2000=1.66% 00:21:03.633 cpu : usr=0.20%, sys=0.50%, ctx=503, majf=0, minf=1 00:21:03.634 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=3.8%, >=64=92.5% 00:21:03.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.634 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:03.634 issued rwts: total=414,429,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.634 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:03.634 job12: (groupid=0, jobs=1): err= 0: pid=91868: Tue Jul 23 05:12:03 2024 00:21:03.634 read: IOPS=87, BW=10.9MiB/s (11.4MB/s)(59.1MiB/5426msec) 00:21:03.634 slat (nsec): min=6770, max=92693, avg=26163.11, stdev=14819.60 00:21:03.634 clat (msec): min=9, max=469, avg=64.42, stdev=55.53 00:21:03.634 lat (msec): min=9, max=469, avg=64.44, stdev=55.53 00:21:03.634 clat percentiles (msec): 00:21:03.634 | 
1.00th=[ 14], 5.00th=[ 38], 10.00th=[ 46], 20.00th=[ 48], 00:21:03.634 | 30.00th=[ 50], 40.00th=[ 50], 50.00th=[ 51], 60.00th=[ 51], 00:21:03.634 | 70.00th=[ 52], 80.00th=[ 54], 90.00th=[ 87], 95.00th=[ 157], 00:21:03.634 | 99.00th=[ 300], 99.50th=[ 443], 99.90th=[ 468], 99.95th=[ 468], 00:21:03.634 | 99.99th=[ 468] 00:21:03.634 bw ( KiB/s): min= 7936, max=17186, per=3.96%, avg=11984.20, stdev=3077.90, samples=10 00:21:03.634 iops : min= 62, max= 134, avg=93.60, stdev=24.00, samples=10 00:21:03.634 write: IOPS=78, BW=9.84MiB/s (10.3MB/s)(53.4MiB/5426msec); 0 zone resets 00:21:03.634 slat (usec): min=12, max=165, avg=33.55, stdev=17.77 00:21:03.634 clat (msec): min=237, max=1134, avg=740.77, stdev=106.10 00:21:03.634 lat (msec): min=237, max=1134, avg=740.80, stdev=106.10 00:21:03.634 clat percentiles (msec): 00:21:03.634 | 1.00th=[ 393], 5.00th=[ 558], 10.00th=[ 693], 20.00th=[ 709], 00:21:03.634 | 30.00th=[ 735], 40.00th=[ 735], 50.00th=[ 743], 60.00th=[ 751], 00:21:03.634 | 70.00th=[ 760], 80.00th=[ 768], 90.00th=[ 785], 95.00th=[ 902], 00:21:03.634 | 99.00th=[ 1116], 99.50th=[ 1133], 99.90th=[ 1133], 99.95th=[ 1133], 00:21:03.634 | 99.99th=[ 1133] 00:21:03.634 bw ( KiB/s): min= 2565, max=10496, per=3.11%, avg=9421.30, stdev=2423.46, samples=10 00:21:03.634 iops : min= 20, max= 82, avg=73.60, stdev=18.95, samples=10 00:21:03.634 lat (msec) : 10=0.22%, 20=0.33%, 50=22.56%, 100=24.67%, 250=3.33% 00:21:03.634 lat (msec) : 500=3.11%, 750=26.44%, 1000=17.67%, 2000=1.67% 00:21:03.634 cpu : usr=0.24%, sys=0.37%, ctx=527, majf=0, minf=1 00:21:03.634 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=93.0% 00:21:03.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.634 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:03.634 issued rwts: total=473,427,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.634 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:03.634 job13: (groupid=0, 
jobs=1): err= 0: pid=91869: Tue Jul 23 05:12:03 2024 00:21:03.634 read: IOPS=81, BW=10.1MiB/s (10.6MB/s)(55.0MiB/5423msec) 00:21:03.634 slat (nsec): min=8527, max=87910, avg=21692.03, stdev=9797.80 00:21:03.634 clat (msec): min=13, max=461, avg=64.56, stdev=51.01 00:21:03.634 lat (msec): min=13, max=461, avg=64.58, stdev=51.01 00:21:03.634 clat percentiles (msec): 00:21:03.634 | 1.00th=[ 35], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 49], 00:21:03.634 | 30.00th=[ 50], 40.00th=[ 50], 50.00th=[ 51], 60.00th=[ 52], 00:21:03.634 | 70.00th=[ 52], 80.00th=[ 55], 90.00th=[ 103], 95.00th=[ 146], 00:21:03.634 | 99.00th=[ 279], 99.50th=[ 447], 99.90th=[ 464], 99.95th=[ 464], 00:21:03.634 | 99.99th=[ 464] 00:21:03.634 bw ( KiB/s): min= 6144, max=16640, per=3.68%, avg=11133.80, stdev=3254.73, samples=10 00:21:03.634 iops : min= 48, max= 130, avg=86.90, stdev=25.44, samples=10 00:21:03.634 write: IOPS=78, BW=9.87MiB/s (10.3MB/s)(53.5MiB/5423msec); 0 zone resets 00:21:03.634 slat (nsec): min=12188, max=67195, avg=26329.60, stdev=8716.52 00:21:03.634 clat (msec): min=249, max=1162, avg=743.36, stdev=105.86 00:21:03.634 lat (msec): min=249, max=1162, avg=743.39, stdev=105.87 00:21:03.634 clat percentiles (msec): 00:21:03.634 | 1.00th=[ 393], 5.00th=[ 535], 10.00th=[ 693], 20.00th=[ 718], 00:21:03.634 | 30.00th=[ 735], 40.00th=[ 743], 50.00th=[ 751], 60.00th=[ 760], 00:21:03.634 | 70.00th=[ 760], 80.00th=[ 776], 90.00th=[ 802], 95.00th=[ 911], 00:21:03.634 | 99.00th=[ 1083], 99.50th=[ 1116], 99.90th=[ 1167], 99.95th=[ 1167], 00:21:03.634 | 99.99th=[ 1167] 00:21:03.634 bw ( KiB/s): min= 2816, max=10496, per=3.12%, avg=9444.40, stdev=2350.27, samples=10 00:21:03.634 iops : min= 22, max= 82, avg=73.70, stdev=18.34, samples=10 00:21:03.634 lat (msec) : 20=0.23%, 50=21.54%, 100=23.62%, 250=4.84%, 500=2.76% 00:21:03.634 lat (msec) : 750=24.19%, 1000=21.43%, 2000=1.38% 00:21:03.634 cpu : usr=0.11%, sys=0.41%, ctx=550, majf=0, minf=1 00:21:03.634 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 
16=1.8%, 32=3.7%, >=64=92.7% 00:21:03.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.634 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:03.634 issued rwts: total=440,428,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.634 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:03.634 job14: (groupid=0, jobs=1): err= 0: pid=91870: Tue Jul 23 05:12:03 2024 00:21:03.634 read: IOPS=75, BW=9706KiB/s (9939kB/s)(51.2MiB/5407msec) 00:21:03.634 slat (usec): min=9, max=559, avg=41.79, stdev=51.35 00:21:03.634 clat (msec): min=35, max=452, avg=67.04, stdev=55.97 00:21:03.634 lat (msec): min=35, max=452, avg=67.08, stdev=55.97 00:21:03.634 clat percentiles (msec): 00:21:03.634 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 49], 00:21:03.634 | 30.00th=[ 50], 40.00th=[ 51], 50.00th=[ 51], 60.00th=[ 52], 00:21:03.634 | 70.00th=[ 52], 80.00th=[ 53], 90.00th=[ 97], 95.00th=[ 215], 00:21:03.634 | 99.00th=[ 284], 99.50th=[ 430], 99.90th=[ 451], 99.95th=[ 451], 00:21:03.634 | 99.99th=[ 451] 00:21:03.634 bw ( KiB/s): min= 8448, max=12288, per=3.44%, avg=10391.20, stdev=1217.00, samples=10 00:21:03.634 iops : min= 66, max= 96, avg=81.10, stdev= 9.41, samples=10 00:21:03.634 write: IOPS=79, BW=9.89MiB/s (10.4MB/s)(53.5MiB/5407msec); 0 zone resets 00:21:03.634 slat (usec): min=12, max=3126, avg=56.10, stdev=159.35 00:21:03.634 clat (msec): min=253, max=1166, avg=743.08, stdev=105.11 00:21:03.634 lat (msec): min=253, max=1166, avg=743.13, stdev=105.12 00:21:03.634 clat percentiles (msec): 00:21:03.634 | 1.00th=[ 393], 5.00th=[ 527], 10.00th=[ 676], 20.00th=[ 726], 00:21:03.634 | 30.00th=[ 735], 40.00th=[ 743], 50.00th=[ 743], 60.00th=[ 751], 00:21:03.634 | 70.00th=[ 760], 80.00th=[ 776], 90.00th=[ 802], 95.00th=[ 894], 00:21:03.634 | 99.00th=[ 1116], 99.50th=[ 1133], 99.90th=[ 1167], 99.95th=[ 1167], 00:21:03.634 | 99.99th=[ 1167] 00:21:03.634 bw ( KiB/s): min= 2560, max=10496, per=3.12%, avg=9444.30, 
stdev=2432.22, samples=10 00:21:03.634 iops : min= 20, max= 82, avg=73.70, stdev=18.97, samples=10 00:21:03.634 lat (msec) : 50=21.60%, 100=22.43%, 250=4.30%, 500=2.51%, 750=26.01% 00:21:03.634 lat (msec) : 1000=21.84%, 2000=1.31% 00:21:03.634 cpu : usr=0.17%, sys=0.44%, ctx=690, majf=0, minf=1 00:21:03.634 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.8%, >=64=92.5% 00:21:03.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.634 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:03.634 issued rwts: total=410,428,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.634 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:03.634 job15: (groupid=0, jobs=1): err= 0: pid=91871: Tue Jul 23 05:12:03 2024 00:21:03.634 read: IOPS=74, BW=9535KiB/s (9764kB/s)(50.6MiB/5437msec) 00:21:03.634 slat (usec): min=9, max=508, avg=28.41, stdev=32.88 00:21:03.634 clat (msec): min=14, max=462, avg=71.03, stdev=66.09 00:21:03.634 lat (msec): min=14, max=462, avg=71.06, stdev=66.09 00:21:03.634 clat percentiles (msec): 00:21:03.634 | 1.00th=[ 24], 5.00th=[ 39], 10.00th=[ 47], 20.00th=[ 49], 00:21:03.634 | 30.00th=[ 50], 40.00th=[ 51], 50.00th=[ 51], 60.00th=[ 52], 00:21:03.634 | 70.00th=[ 52], 80.00th=[ 54], 90.00th=[ 130], 95.00th=[ 275], 00:21:03.634 | 99.00th=[ 296], 99.50th=[ 305], 99.90th=[ 464], 99.95th=[ 464], 00:21:03.634 | 99.99th=[ 464] 00:21:03.634 bw ( KiB/s): min= 6656, max=15104, per=3.40%, avg=10291.20, stdev=2920.84, samples=10 00:21:03.634 iops : min= 52, max= 118, avg=80.40, stdev=22.82, samples=10 00:21:03.634 write: IOPS=79, BW=9.91MiB/s (10.4MB/s)(53.9MiB/5437msec); 0 zone resets 00:21:03.634 slat (usec): min=11, max=5048, avg=45.51, stdev=246.19 00:21:03.634 clat (msec): min=78, max=1187, avg=738.91, stdev=121.16 00:21:03.634 lat (msec): min=78, max=1187, avg=738.95, stdev=121.14 00:21:03.634 clat percentiles (msec): 00:21:03.634 | 1.00th=[ 317], 5.00th=[ 510], 10.00th=[ 617], 20.00th=[ 
709], 00:21:03.634 | 30.00th=[ 735], 40.00th=[ 743], 50.00th=[ 751], 60.00th=[ 760], 00:21:03.634 | 70.00th=[ 768], 80.00th=[ 776], 90.00th=[ 802], 95.00th=[ 911], 00:21:03.634 | 99.00th=[ 1150], 99.50th=[ 1183], 99.90th=[ 1183], 99.95th=[ 1183], 00:21:03.634 | 99.99th=[ 1183] 00:21:03.634 bw ( KiB/s): min= 2816, max=10496, per=3.13%, avg=9472.00, stdev=2352.48, samples=10 00:21:03.634 iops : min= 22, max= 82, avg=74.00, stdev=18.38, samples=10 00:21:03.634 lat (msec) : 20=0.24%, 50=20.33%, 100=22.49%, 250=2.03%, 500=5.50% 00:21:03.634 lat (msec) : 750=24.52%, 1000=23.09%, 2000=1.79% 00:21:03.634 cpu : usr=0.18%, sys=0.37%, ctx=528, majf=0, minf=1 00:21:03.634 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.8%, >=64=92.5% 00:21:03.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.634 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:03.634 issued rwts: total=405,431,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.634 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:03.634 job16: (groupid=0, jobs=1): err= 0: pid=91872: Tue Jul 23 05:12:03 2024 00:21:03.634 read: IOPS=84, BW=10.5MiB/s (11.1MB/s)(57.0MiB/5406msec) 00:21:03.634 slat (usec): min=9, max=638, avg=36.03, stdev=57.78 00:21:03.634 clat (msec): min=29, max=440, avg=69.39, stdev=53.16 00:21:03.634 lat (msec): min=29, max=440, avg=69.42, stdev=53.16 00:21:03.634 clat percentiles (msec): 00:21:03.634 | 1.00th=[ 37], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 49], 00:21:03.634 | 30.00th=[ 50], 40.00th=[ 51], 50.00th=[ 51], 60.00th=[ 52], 00:21:03.634 | 70.00th=[ 52], 80.00th=[ 58], 90.00th=[ 144], 95.00th=[ 182], 00:21:03.634 | 99.00th=[ 234], 99.50th=[ 418], 99.90th=[ 439], 99.95th=[ 439], 00:21:03.634 | 99.99th=[ 439] 00:21:03.634 bw ( KiB/s): min= 8704, max=18981, per=3.83%, avg=11572.60, stdev=3578.07, samples=10 00:21:03.634 iops : min= 68, max= 148, avg=90.30, stdev=27.90, samples=10 00:21:03.634 write: IOPS=79, BW=9.90MiB/s 
(10.4MB/s)(53.5MiB/5406msec); 0 zone resets 00:21:03.634 slat (usec): min=12, max=792, avg=47.06, stdev=74.34 00:21:03.635 clat (msec): min=234, max=1142, avg=733.19, stdev=109.00 00:21:03.635 lat (msec): min=234, max=1142, avg=733.24, stdev=109.01 00:21:03.635 clat percentiles (msec): 00:21:03.635 | 1.00th=[ 380], 5.00th=[ 531], 10.00th=[ 642], 20.00th=[ 709], 00:21:03.635 | 30.00th=[ 735], 40.00th=[ 735], 50.00th=[ 743], 60.00th=[ 751], 00:21:03.635 | 70.00th=[ 760], 80.00th=[ 760], 90.00th=[ 785], 95.00th=[ 894], 00:21:03.635 | 99.00th=[ 1099], 99.50th=[ 1133], 99.90th=[ 1150], 99.95th=[ 1150], 00:21:03.635 | 99.99th=[ 1150] 00:21:03.635 bw ( KiB/s): min= 2821, max=10496, per=3.12%, avg=9444.80, stdev=2341.96, samples=10 00:21:03.635 iops : min= 22, max= 82, avg=73.70, stdev=18.27, samples=10 00:21:03.635 lat (msec) : 50=21.27%, 100=22.62%, 250=7.47%, 500=2.15%, 750=29.07% 00:21:03.635 lat (msec) : 1000=15.95%, 2000=1.47% 00:21:03.635 cpu : usr=0.13%, sys=0.46%, ctx=706, majf=0, minf=1 00:21:03.635 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=92.9% 00:21:03.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.635 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:03.635 issued rwts: total=456,428,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.635 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:03.635 job17: (groupid=0, jobs=1): err= 0: pid=91873: Tue Jul 23 05:12:03 2024 00:21:03.635 read: IOPS=74, BW=9475KiB/s (9703kB/s)(50.1MiB/5417msec) 00:21:03.635 slat (usec): min=6, max=9656, avg=50.27, stdev=481.41 00:21:03.635 clat (msec): min=35, max=441, avg=68.58, stdev=49.26 00:21:03.635 lat (msec): min=35, max=441, avg=68.63, stdev=49.28 00:21:03.635 clat percentiles (msec): 00:21:03.635 | 1.00th=[ 39], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 50], 00:21:03.635 | 30.00th=[ 50], 40.00th=[ 51], 50.00th=[ 51], 60.00th=[ 52], 00:21:03.635 | 70.00th=[ 52], 80.00th=[ 55], 
90.00th=[ 146], 95.00th=[ 194], 00:21:03.635 | 99.00th=[ 222], 99.50th=[ 228], 99.90th=[ 443], 99.95th=[ 443], 00:21:03.635 | 99.99th=[ 443] 00:21:03.635 bw ( KiB/s): min= 7936, max=16606, per=3.37%, avg=10183.70, stdev=2668.57, samples=10 00:21:03.635 iops : min= 62, max= 129, avg=79.40, stdev=20.73, samples=10 00:21:03.635 write: IOPS=79, BW=9.95MiB/s (10.4MB/s)(53.9MiB/5417msec); 0 zone resets 00:21:03.635 slat (usec): min=11, max=1057, avg=38.41, stdev=66.40 00:21:03.635 clat (msec): min=237, max=1131, avg=737.71, stdev=108.74 00:21:03.635 lat (msec): min=238, max=1131, avg=737.75, stdev=108.72 00:21:03.635 clat percentiles (msec): 00:21:03.635 | 1.00th=[ 372], 5.00th=[ 527], 10.00th=[ 651], 20.00th=[ 709], 00:21:03.635 | 30.00th=[ 726], 40.00th=[ 735], 50.00th=[ 751], 60.00th=[ 760], 00:21:03.635 | 70.00th=[ 768], 80.00th=[ 776], 90.00th=[ 802], 95.00th=[ 919], 00:21:03.635 | 99.00th=[ 1083], 99.50th=[ 1099], 99.90th=[ 1133], 99.95th=[ 1133], 00:21:03.635 | 99.99th=[ 1133] 00:21:03.635 bw ( KiB/s): min= 2810, max=10496, per=3.13%, avg=9469.30, stdev=2353.61, samples=10 00:21:03.635 iops : min= 21, max= 82, avg=73.80, stdev=18.66, samples=10 00:21:03.635 lat (msec) : 50=20.79%, 100=21.27%, 250=6.13%, 500=2.04%, 750=25.12% 00:21:03.635 lat (msec) : 1000=23.20%, 2000=1.44% 00:21:03.635 cpu : usr=0.18%, sys=0.33%, ctx=586, majf=0, minf=1 00:21:03.635 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.8%, >=64=92.4% 00:21:03.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.635 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:03.635 issued rwts: total=401,431,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.635 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:03.635 job18: (groupid=0, jobs=1): err= 0: pid=91874: Tue Jul 23 05:12:03 2024 00:21:03.635 read: IOPS=79, BW=9.92MiB/s (10.4MB/s)(53.8MiB/5419msec) 00:21:03.635 slat (usec): min=8, max=282, avg=26.06, stdev=20.71 00:21:03.635 
clat (msec): min=34, max=443, avg=63.86, stdev=44.30 00:21:03.635 lat (msec): min=34, max=443, avg=63.89, stdev=44.30 00:21:03.635 clat percentiles (msec): 00:21:03.635 | 1.00th=[ 37], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 50], 00:21:03.635 | 30.00th=[ 50], 40.00th=[ 51], 50.00th=[ 51], 60.00th=[ 52], 00:21:03.635 | 70.00th=[ 52], 80.00th=[ 53], 90.00th=[ 99], 95.00th=[ 150], 00:21:03.635 | 99.00th=[ 218], 99.50th=[ 234], 99.90th=[ 443], 99.95th=[ 443], 00:21:03.635 | 99.99th=[ 443] 00:21:03.635 bw ( KiB/s): min= 8704, max=15073, per=3.62%, avg=10951.60, stdev=1818.20, samples=10 00:21:03.635 iops : min= 68, max= 117, avg=85.40, stdev=14.04, samples=10 00:21:03.635 write: IOPS=79, BW=9.96MiB/s (10.4MB/s)(54.0MiB/5419msec); 0 zone resets 00:21:03.635 slat (usec): min=13, max=445, avg=30.46, stdev=28.49 00:21:03.635 clat (msec): min=241, max=1120, avg=738.13, stdev=107.69 00:21:03.635 lat (msec): min=241, max=1120, avg=738.16, stdev=107.70 00:21:03.635 clat percentiles (msec): 00:21:03.635 | 1.00th=[ 384], 5.00th=[ 518], 10.00th=[ 659], 20.00th=[ 718], 00:21:03.635 | 30.00th=[ 735], 40.00th=[ 743], 50.00th=[ 743], 60.00th=[ 751], 00:21:03.635 | 70.00th=[ 760], 80.00th=[ 776], 90.00th=[ 802], 95.00th=[ 869], 00:21:03.635 | 99.00th=[ 1099], 99.50th=[ 1099], 99.90th=[ 1116], 99.95th=[ 1116], 00:21:03.635 | 99.99th=[ 1116] 00:21:03.635 bw ( KiB/s): min= 2810, max=10496, per=3.13%, avg=9469.30, stdev=2353.61, samples=10 00:21:03.635 iops : min= 21, max= 82, avg=73.80, stdev=18.66, samples=10 00:21:03.635 lat (msec) : 50=20.77%, 100=24.36%, 250=4.64%, 500=2.44%, 750=26.91% 00:21:03.635 lat (msec) : 1000=19.49%, 2000=1.39% 00:21:03.635 cpu : usr=0.11%, sys=0.44%, ctx=536, majf=0, minf=1 00:21:03.635 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=3.7%, >=64=92.7% 00:21:03.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.635 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:03.635 issued rwts: 
total=430,432,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.635 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:03.635 job19: (groupid=0, jobs=1): err= 0: pid=91875: Tue Jul 23 05:12:03 2024 00:21:03.635 read: IOPS=88, BW=11.0MiB/s (11.6MB/s)(59.9MiB/5423msec) 00:21:03.635 slat (usec): min=9, max=201, avg=25.57, stdev=14.89 00:21:03.635 clat (msec): min=5, max=466, avg=64.97, stdev=60.53 00:21:03.635 lat (msec): min=5, max=466, avg=65.00, stdev=60.53 00:21:03.635 clat percentiles (msec): 00:21:03.635 | 1.00th=[ 9], 5.00th=[ 34], 10.00th=[ 47], 20.00th=[ 48], 00:21:03.635 | 30.00th=[ 50], 40.00th=[ 50], 50.00th=[ 51], 60.00th=[ 51], 00:21:03.635 | 70.00th=[ 52], 80.00th=[ 53], 90.00th=[ 82], 95.00th=[ 218], 00:21:03.635 | 99.00th=[ 430], 99.50th=[ 443], 99.90th=[ 468], 99.95th=[ 468], 00:21:03.635 | 99.99th=[ 468] 00:21:03.635 bw ( KiB/s): min= 9216, max=17920, per=4.00%, avg=12106.40, stdev=2625.09, samples=10 00:21:03.635 iops : min= 72, max= 140, avg=94.50, stdev=20.52, samples=10 00:21:03.635 write: IOPS=78, BW=9.84MiB/s (10.3MB/s)(53.4MiB/5423msec); 0 zone resets 00:21:03.635 slat (usec): min=10, max=114, avg=30.70, stdev=13.72 00:21:03.635 clat (msec): min=131, max=1119, avg=738.73, stdev=109.73 00:21:03.635 lat (msec): min=131, max=1119, avg=738.76, stdev=109.74 00:21:03.635 clat percentiles (msec): 00:21:03.635 | 1.00th=[ 326], 5.00th=[ 550], 10.00th=[ 676], 20.00th=[ 709], 00:21:03.635 | 30.00th=[ 726], 40.00th=[ 735], 50.00th=[ 743], 60.00th=[ 751], 00:21:03.635 | 70.00th=[ 760], 80.00th=[ 768], 90.00th=[ 793], 95.00th=[ 911], 00:21:03.635 | 99.00th=[ 1099], 99.50th=[ 1116], 99.90th=[ 1116], 99.95th=[ 1116], 00:21:03.635 | 99.99th=[ 1116] 00:21:03.635 bw ( KiB/s): min= 2560, max=10496, per=3.12%, avg=9444.30, stdev=2429.23, samples=10 00:21:03.635 iops : min= 20, max= 82, avg=73.70, stdev=18.95, samples=10 00:21:03.635 lat (msec) : 10=0.55%, 20=1.32%, 50=23.07%, 100=23.40%, 250=2.43% 00:21:03.635 lat (msec) : 500=3.64%, 750=27.15%, 
1000=17.00%, 2000=1.43% 00:21:03.635 cpu : usr=0.22%, sys=0.42%, ctx=501, majf=0, minf=1 00:21:03.635 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.5%, >=64=93.0% 00:21:03.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.635 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:03.635 issued rwts: total=479,427,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.635 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:03.635 job20: (groupid=0, jobs=1): err= 0: pid=91876: Tue Jul 23 05:12:03 2024 00:21:03.635 read: IOPS=75, BW=9675KiB/s (9907kB/s)(51.1MiB/5411msec) 00:21:03.635 slat (nsec): min=8756, max=90877, avg=25174.02, stdev=12359.82 00:21:03.635 clat (msec): min=28, max=414, avg=64.64, stdev=45.46 00:21:03.635 lat (msec): min=28, max=414, avg=64.67, stdev=45.46 00:21:03.635 clat percentiles (msec): 00:21:03.635 | 1.00th=[ 39], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 49], 00:21:03.635 | 30.00th=[ 50], 40.00th=[ 50], 50.00th=[ 51], 60.00th=[ 51], 00:21:03.635 | 70.00th=[ 52], 80.00th=[ 54], 90.00th=[ 126], 95.00th=[ 182], 00:21:03.635 | 99.00th=[ 220], 99.50th=[ 228], 99.90th=[ 414], 99.95th=[ 414], 00:21:03.635 | 99.99th=[ 414] 00:21:03.635 bw ( KiB/s): min= 6656, max=13056, per=3.44%, avg=10416.60, stdev=2464.67, samples=10 00:21:03.635 iops : min= 52, max= 102, avg=81.30, stdev=19.17, samples=10 00:21:03.635 write: IOPS=79, BW=9.98MiB/s (10.5MB/s)(54.0MiB/5411msec); 0 zone resets 00:21:03.635 slat (nsec): min=12144, max=82177, avg=29626.38, stdev=12340.80 00:21:03.635 clat (msec): min=240, max=1116, avg=739.31, stdev=105.15 00:21:03.635 lat (msec): min=240, max=1116, avg=739.34, stdev=105.16 00:21:03.635 clat percentiles (msec): 00:21:03.635 | 1.00th=[ 380], 5.00th=[ 527], 10.00th=[ 659], 20.00th=[ 726], 00:21:03.635 | 30.00th=[ 735], 40.00th=[ 743], 50.00th=[ 743], 60.00th=[ 760], 00:21:03.635 | 70.00th=[ 760], 80.00th=[ 776], 90.00th=[ 802], 95.00th=[ 894], 00:21:03.636 | 99.00th=[ 
1070], 99.50th=[ 1099], 99.90th=[ 1116], 99.95th=[ 1116], 00:21:03.636 | 99.99th=[ 1116] 00:21:03.636 bw ( KiB/s): min= 2810, max=10496, per=3.13%, avg=9471.40, stdev=2357.45, samples=10 00:21:03.636 iops : min= 21, max= 82, avg=73.90, stdev=18.72, samples=10 00:21:03.636 lat (msec) : 50=22.24%, 100=21.17%, 250=5.11%, 500=2.26%, 750=25.68% 00:21:03.636 lat (msec) : 1000=22.12%, 2000=1.43% 00:21:03.636 cpu : usr=0.26%, sys=0.33%, ctx=505, majf=0, minf=1 00:21:03.636 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.8%, >=64=92.5% 00:21:03.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.636 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:03.636 issued rwts: total=409,432,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.636 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:03.636 job21: (groupid=0, jobs=1): err= 0: pid=91877: Tue Jul 23 05:12:03 2024 00:21:03.636 read: IOPS=72, BW=9280KiB/s (9503kB/s)(49.0MiB/5407msec) 00:21:03.636 slat (nsec): min=10089, max=84378, avg=27295.72, stdev=11805.96 00:21:03.636 clat (msec): min=36, max=425, avg=68.35, stdev=57.10 00:21:03.636 lat (msec): min=36, max=425, avg=68.38, stdev=57.10 00:21:03.636 clat percentiles (msec): 00:21:03.636 | 1.00th=[ 37], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 49], 00:21:03.636 | 30.00th=[ 50], 40.00th=[ 50], 50.00th=[ 51], 60.00th=[ 52], 00:21:03.636 | 70.00th=[ 52], 80.00th=[ 54], 90.00th=[ 136], 95.00th=[ 201], 00:21:03.636 | 99.00th=[ 414], 99.50th=[ 426], 99.90th=[ 426], 99.95th=[ 426], 00:21:03.636 | 99.99th=[ 426] 00:21:03.636 bw ( KiB/s): min= 7680, max=13285, per=3.28%, avg=9907.00, stdev=1813.25, samples=10 00:21:03.636 iops : min= 60, max= 103, avg=77.30, stdev=13.97, samples=10 00:21:03.636 write: IOPS=79, BW=9.92MiB/s (10.4MB/s)(53.6MiB/5407msec); 0 zone resets 00:21:03.636 slat (nsec): min=13295, max=89830, avg=31890.05, stdev=12351.45 00:21:03.636 clat (msec): min=241, max=1126, avg=742.96, stdev=105.27 
00:21:03.636 lat (msec): min=241, max=1126, avg=742.99, stdev=105.27 00:21:03.636 clat percentiles (msec): 00:21:03.636 | 1.00th=[ 376], 5.00th=[ 542], 10.00th=[ 651], 20.00th=[ 718], 00:21:03.636 | 30.00th=[ 735], 40.00th=[ 743], 50.00th=[ 751], 60.00th=[ 760], 00:21:03.636 | 70.00th=[ 768], 80.00th=[ 776], 90.00th=[ 802], 95.00th=[ 902], 00:21:03.636 | 99.00th=[ 1083], 99.50th=[ 1116], 99.90th=[ 1133], 99.95th=[ 1133], 00:21:03.636 | 99.99th=[ 1133] 00:21:03.636 bw ( KiB/s): min= 2821, max=10496, per=3.14%, avg=9496.00, stdev=2357.75, samples=10 00:21:03.636 iops : min= 22, max= 82, avg=74.10, stdev=18.41, samples=10 00:21:03.636 lat (msec) : 50=21.92%, 100=20.22%, 250=5.12%, 500=2.68%, 750=24.73% 00:21:03.636 lat (msec) : 1000=24.12%, 2000=1.22% 00:21:03.636 cpu : usr=0.22%, sys=0.44%, ctx=488, majf=0, minf=1 00:21:03.636 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.9%, >=64=92.3% 00:21:03.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.636 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:03.636 issued rwts: total=392,429,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.636 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:03.636 job22: (groupid=0, jobs=1): err= 0: pid=91878: Tue Jul 23 05:12:03 2024 00:21:03.636 read: IOPS=89, BW=11.2MiB/s (11.8MB/s)(60.8MiB/5408msec) 00:21:03.636 slat (usec): min=7, max=221, avg=25.95, stdev=18.19 00:21:03.636 clat (msec): min=36, max=447, avg=64.16, stdev=52.85 00:21:03.636 lat (msec): min=36, max=447, avg=64.18, stdev=52.85 00:21:03.636 clat percentiles (msec): 00:21:03.636 | 1.00th=[ 38], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 49], 00:21:03.636 | 30.00th=[ 50], 40.00th=[ 50], 50.00th=[ 51], 60.00th=[ 51], 00:21:03.636 | 70.00th=[ 52], 80.00th=[ 53], 90.00th=[ 89], 95.00th=[ 159], 00:21:03.636 | 99.00th=[ 409], 99.50th=[ 435], 99.90th=[ 447], 99.95th=[ 447], 00:21:03.636 | 99.99th=[ 447] 00:21:03.636 bw ( KiB/s): min= 9728, max=14592, 
per=4.05%, avg=12260.20, stdev=1496.98, samples=10 00:21:03.636 iops : min= 76, max= 114, avg=95.70, stdev=11.78, samples=10 00:21:03.636 write: IOPS=78, BW=9.85MiB/s (10.3MB/s)(53.2MiB/5408msec); 0 zone resets 00:21:03.636 slat (usec): min=10, max=307, avg=33.38, stdev=25.25 00:21:03.636 clat (msec): min=251, max=1125, avg=738.17, stdev=104.58 00:21:03.636 lat (msec): min=251, max=1125, avg=738.21, stdev=104.58 00:21:03.636 clat percentiles (msec): 00:21:03.636 | 1.00th=[ 401], 5.00th=[ 542], 10.00th=[ 676], 20.00th=[ 718], 00:21:03.636 | 30.00th=[ 726], 40.00th=[ 735], 50.00th=[ 743], 60.00th=[ 751], 00:21:03.636 | 70.00th=[ 751], 80.00th=[ 768], 90.00th=[ 810], 95.00th=[ 877], 00:21:03.636 | 99.00th=[ 1083], 99.50th=[ 1083], 99.90th=[ 1133], 99.95th=[ 1133], 00:21:03.636 | 99.99th=[ 1133] 00:21:03.636 bw ( KiB/s): min= 2560, max=10496, per=3.12%, avg=9444.30, stdev=2432.22, samples=10 00:21:03.636 iops : min= 20, max= 82, avg=73.70, stdev=18.97, samples=10 00:21:03.636 lat (msec) : 50=26.21%, 100=22.48%, 250=3.73%, 500=2.85%, 750=28.51% 00:21:03.636 lat (msec) : 1000=14.80%, 2000=1.43% 00:21:03.636 cpu : usr=0.26%, sys=0.37%, ctx=620, majf=0, minf=1 00:21:03.636 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.5%, >=64=93.1% 00:21:03.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.636 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:03.636 issued rwts: total=486,426,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.636 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:03.636 job23: (groupid=0, jobs=1): err= 0: pid=91879: Tue Jul 23 05:12:03 2024 00:21:03.636 read: IOPS=75, BW=9691KiB/s (9924kB/s)(51.1MiB/5402msec) 00:21:03.636 slat (usec): min=8, max=955, avg=29.83, stdev=48.41 00:21:03.636 clat (msec): min=34, max=440, avg=72.00, stdev=61.07 00:21:03.636 lat (msec): min=34, max=440, avg=72.03, stdev=61.07 00:21:03.636 clat percentiles (msec): 00:21:03.636 | 1.00th=[ 37], 
5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 49], 00:21:03.636 | 30.00th=[ 50], 40.00th=[ 51], 50.00th=[ 51], 60.00th=[ 52], 00:21:03.636 | 70.00th=[ 52], 80.00th=[ 56], 90.00th=[ 155], 95.00th=[ 194], 00:21:03.636 | 99.00th=[ 418], 99.50th=[ 418], 99.90th=[ 443], 99.95th=[ 443], 00:21:03.636 | 99.99th=[ 443] 00:21:03.636 bw ( KiB/s): min= 6656, max=16673, per=3.40%, avg=10291.70, stdev=3283.95, samples=10 00:21:03.636 iops : min= 52, max= 130, avg=80.30, stdev=25.51, samples=10 00:21:03.636 write: IOPS=79, BW=9.88MiB/s (10.4MB/s)(53.4MiB/5402msec); 0 zone resets 00:21:03.636 slat (usec): min=12, max=948, avg=38.61, stdev=64.77 00:21:03.636 clat (msec): min=245, max=1118, avg=739.42, stdev=107.33 00:21:03.636 lat (msec): min=246, max=1118, avg=739.46, stdev=107.32 00:21:03.636 clat percentiles (msec): 00:21:03.636 | 1.00th=[ 388], 5.00th=[ 550], 10.00th=[ 634], 20.00th=[ 718], 00:21:03.636 | 30.00th=[ 726], 40.00th=[ 743], 50.00th=[ 751], 60.00th=[ 760], 00:21:03.636 | 70.00th=[ 768], 80.00th=[ 776], 90.00th=[ 802], 95.00th=[ 902], 00:21:03.636 | 99.00th=[ 1099], 99.50th=[ 1116], 99.90th=[ 1116], 99.95th=[ 1116], 00:21:03.636 | 99.99th=[ 1116] 00:21:03.636 bw ( KiB/s): min= 2565, max=10496, per=3.13%, avg=9470.40, stdev=2435.06, samples=10 00:21:03.636 iops : min= 20, max= 82, avg=73.90, stdev=19.00, samples=10 00:21:03.636 lat (msec) : 50=20.45%, 100=21.89%, 250=5.86%, 500=2.51%, 750=25.00% 00:21:03.636 lat (msec) : 1000=22.85%, 2000=1.44% 00:21:03.636 cpu : usr=0.13%, sys=0.52%, ctx=556, majf=0, minf=1 00:21:03.636 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.8%, >=64=92.5% 00:21:03.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.636 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:03.636 issued rwts: total=409,427,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.637 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:03.637 job24: (groupid=0, jobs=1): err= 0: pid=91880: Tue Jul 23 
05:12:03 2024 00:21:03.637 read: IOPS=83, BW=10.5MiB/s (11.0MB/s)(57.0MiB/5441msec) 00:21:03.637 slat (nsec): min=8634, max=75208, avg=25110.78, stdev=11804.00 00:21:03.637 clat (msec): min=9, max=456, avg=59.61, stdev=41.54 00:21:03.637 lat (msec): min=9, max=456, avg=59.64, stdev=41.54 00:21:03.637 clat percentiles (msec): 00:21:03.637 | 1.00th=[ 16], 5.00th=[ 39], 10.00th=[ 47], 20.00th=[ 48], 00:21:03.637 | 30.00th=[ 50], 40.00th=[ 50], 50.00th=[ 51], 60.00th=[ 51], 00:21:03.637 | 70.00th=[ 52], 80.00th=[ 53], 90.00th=[ 70], 95.00th=[ 127], 00:21:03.637 | 99.00th=[ 275], 99.50th=[ 284], 99.90th=[ 456], 99.95th=[ 456], 00:21:03.637 | 99.99th=[ 456] 00:21:03.637 bw ( KiB/s): min= 8960, max=13568, per=3.85%, avg=11648.00, stdev=1489.06, samples=10 00:21:03.637 iops : min= 70, max= 106, avg=91.00, stdev=11.63, samples=10 00:21:03.637 write: IOPS=79, BW=9.90MiB/s (10.4MB/s)(53.9MiB/5441msec); 0 zone resets 00:21:03.637 slat (nsec): min=12525, max=73610, avg=29636.44, stdev=11046.31 00:21:03.637 clat (msec): min=212, max=1134, avg=743.57, stdev=109.40 00:21:03.637 lat (msec): min=212, max=1134, avg=743.60, stdev=109.40 00:21:03.637 clat percentiles (msec): 00:21:03.637 | 1.00th=[ 388], 5.00th=[ 531], 10.00th=[ 659], 20.00th=[ 718], 00:21:03.637 | 30.00th=[ 735], 40.00th=[ 735], 50.00th=[ 743], 60.00th=[ 751], 00:21:03.637 | 70.00th=[ 760], 80.00th=[ 776], 90.00th=[ 835], 95.00th=[ 911], 00:21:03.637 | 99.00th=[ 1099], 99.50th=[ 1116], 99.90th=[ 1133], 99.95th=[ 1133], 00:21:03.637 | 99.99th=[ 1133] 00:21:03.637 bw ( KiB/s): min= 2560, max=10496, per=3.11%, avg=9420.80, stdev=2431.03, samples=10 00:21:03.637 iops : min= 20, max= 82, avg=73.60, stdev=18.99, samples=10 00:21:03.637 lat (msec) : 10=0.23%, 20=0.34%, 50=23.22%, 100=24.35%, 250=2.59% 00:21:03.637 lat (msec) : 500=2.59%, 750=26.04%, 1000=19.05%, 2000=1.58% 00:21:03.637 cpu : usr=0.04%, sys=0.59%, ctx=515, majf=0, minf=1 00:21:03.637 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=92.9% 
00:21:03.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.637 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:03.637 issued rwts: total=456,431,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.637 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:03.637 job25: (groupid=0, jobs=1): err= 0: pid=91881: Tue Jul 23 05:12:03 2024 00:21:03.637 read: IOPS=80, BW=10.1MiB/s (10.6MB/s)(54.6MiB/5425msec) 00:21:03.637 slat (nsec): min=7061, max=83633, avg=27896.34, stdev=14106.47 00:21:03.637 clat (msec): min=35, max=448, avg=64.76, stdev=48.93 00:21:03.637 lat (msec): min=35, max=448, avg=64.78, stdev=48.92 00:21:03.637 clat percentiles (msec): 00:21:03.637 | 1.00th=[ 37], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 49], 00:21:03.637 | 30.00th=[ 50], 40.00th=[ 50], 50.00th=[ 51], 60.00th=[ 52], 00:21:03.637 | 70.00th=[ 52], 80.00th=[ 54], 90.00th=[ 97], 95.00th=[ 186], 00:21:03.637 | 99.00th=[ 275], 99.50th=[ 296], 99.90th=[ 447], 99.95th=[ 447], 00:21:03.637 | 99.99th=[ 447] 00:21:03.637 bw ( KiB/s): min= 8960, max=14592, per=3.68%, avg=11134.00, stdev=1713.83, samples=10 00:21:03.637 iops : min= 70, max= 114, avg=86.90, stdev=13.47, samples=10 00:21:03.637 write: IOPS=79, BW=9.91MiB/s (10.4MB/s)(53.8MiB/5425msec); 0 zone resets 00:21:03.637 slat (usec): min=11, max=241, avg=34.52, stdev=17.79 00:21:03.637 clat (msec): min=251, max=1151, avg=740.48, stdev=105.41 00:21:03.637 lat (msec): min=251, max=1151, avg=740.52, stdev=105.40 00:21:03.637 clat percentiles (msec): 00:21:03.637 | 1.00th=[ 397], 5.00th=[ 531], 10.00th=[ 651], 20.00th=[ 718], 00:21:03.637 | 30.00th=[ 735], 40.00th=[ 743], 50.00th=[ 751], 60.00th=[ 751], 00:21:03.637 | 70.00th=[ 760], 80.00th=[ 768], 90.00th=[ 785], 95.00th=[ 927], 00:21:03.637 | 99.00th=[ 1099], 99.50th=[ 1133], 99.90th=[ 1150], 99.95th=[ 1150], 00:21:03.637 | 99.99th=[ 1150] 00:21:03.637 bw ( KiB/s): min= 2560, max=10496, per=3.12%, avg=9444.30, stdev=2434.97, samples=10 
00:21:03.637 iops : min= 20, max= 82, avg=73.70, stdev=18.99, samples=10 00:21:03.637 lat (msec) : 50=23.18%, 100=22.72%, 250=3.81%, 500=2.54%, 750=24.57% 00:21:03.637 lat (msec) : 1000=22.03%, 2000=1.15% 00:21:03.637 cpu : usr=0.13%, sys=0.52%, ctx=522, majf=0, minf=1 00:21:03.637 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=3.7%, >=64=92.7% 00:21:03.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.637 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:03.637 issued rwts: total=437,430,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.637 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:03.637 job26: (groupid=0, jobs=1): err= 0: pid=91882: Tue Jul 23 05:12:03 2024 00:21:03.637 read: IOPS=72, BW=9268KiB/s (9490kB/s)(49.0MiB/5414msec) 00:21:03.637 slat (usec): min=8, max=673, avg=33.52, stdev=39.54 00:21:03.637 clat (msec): min=35, max=462, avg=67.68, stdev=49.44 00:21:03.637 lat (msec): min=35, max=462, avg=67.71, stdev=49.43 00:21:03.637 clat percentiles (msec): 00:21:03.637 | 1.00th=[ 37], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 49], 00:21:03.637 | 30.00th=[ 50], 40.00th=[ 51], 50.00th=[ 51], 60.00th=[ 52], 00:21:03.637 | 70.00th=[ 52], 80.00th=[ 55], 90.00th=[ 123], 95.00th=[ 211], 00:21:03.637 | 99.00th=[ 234], 99.50th=[ 251], 99.90th=[ 464], 99.95th=[ 464], 00:21:03.637 | 99.99th=[ 464] 00:21:03.637 bw ( KiB/s): min= 8192, max=15390, per=3.31%, avg=10010.70, stdev=2030.38, samples=10 00:21:03.637 iops : min= 64, max= 120, avg=78.10, stdev=15.82, samples=10 00:21:03.637 write: IOPS=79, BW=9.95MiB/s (10.4MB/s)(53.9MiB/5414msec); 0 zone resets 00:21:03.637 slat (usec): min=12, max=449, avg=38.46, stdev=34.19 00:21:03.637 clat (msec): min=245, max=1152, avg=741.22, stdev=112.52 00:21:03.637 lat (msec): min=245, max=1152, avg=741.26, stdev=112.52 00:21:03.637 clat percentiles (msec): 00:21:03.637 | 1.00th=[ 384], 5.00th=[ 514], 10.00th=[ 651], 20.00th=[ 718], 00:21:03.637 | 30.00th=[ 
726], 40.00th=[ 743], 50.00th=[ 751], 60.00th=[ 760], 00:21:03.637 | 70.00th=[ 768], 80.00th=[ 776], 90.00th=[ 802], 95.00th=[ 927], 00:21:03.637 | 99.00th=[ 1116], 99.50th=[ 1133], 99.90th=[ 1150], 99.95th=[ 1150], 00:21:03.637 | 99.99th=[ 1150] 00:21:03.637 bw ( KiB/s): min= 2565, max=10496, per=3.12%, avg=9444.80, stdev=2430.40, samples=10 00:21:03.637 iops : min= 20, max= 82, avg=73.70, stdev=18.96, samples=10 00:21:03.637 lat (msec) : 50=20.29%, 100=21.39%, 250=5.95%, 500=2.19%, 750=22.84% 00:21:03.637 lat (msec) : 1000=25.64%, 2000=1.70% 00:21:03.637 cpu : usr=0.15%, sys=0.50%, ctx=514, majf=0, minf=1 00:21:03.637 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.9%, >=64=92.3% 00:21:03.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.637 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:03.637 issued rwts: total=392,431,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.637 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:03.637 job27: (groupid=0, jobs=1): err= 0: pid=91883: Tue Jul 23 05:12:03 2024 00:21:03.637 read: IOPS=74, BW=9498KiB/s (9726kB/s)(50.4MiB/5431msec) 00:21:03.637 slat (usec): min=9, max=625, avg=27.26, stdev=44.27 00:21:03.637 clat (msec): min=17, max=460, avg=67.83, stdev=50.56 00:21:03.637 lat (msec): min=17, max=460, avg=67.86, stdev=50.56 00:21:03.637 clat percentiles (msec): 00:21:03.637 | 1.00th=[ 28], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 50], 00:21:03.637 | 30.00th=[ 50], 40.00th=[ 51], 50.00th=[ 51], 60.00th=[ 52], 00:21:03.637 | 70.00th=[ 53], 80.00th=[ 57], 90.00th=[ 142], 95.00th=[ 174], 00:21:03.637 | 99.00th=[ 279], 99.50th=[ 292], 99.90th=[ 460], 99.95th=[ 460], 00:21:03.637 | 99.99th=[ 460] 00:21:03.637 bw ( KiB/s): min= 7424, max=17152, per=3.39%, avg=10263.30, stdev=3213.58, samples=10 00:21:03.637 iops : min= 58, max= 134, avg=80.10, stdev=25.08, samples=10 00:21:03.637 write: IOPS=79, BW=9.90MiB/s (10.4MB/s)(53.8MiB/5431msec); 0 zone resets 
00:21:03.637 slat (usec): min=11, max=3945, avg=39.99, stdev=190.76 00:21:03.637 clat (msec): min=250, max=1186, avg=743.52, stdev=109.45 00:21:03.637 lat (msec): min=250, max=1186, avg=743.56, stdev=109.45 00:21:03.637 clat percentiles (msec): 00:21:03.637 | 1.00th=[ 393], 5.00th=[ 535], 10.00th=[ 667], 20.00th=[ 726], 00:21:03.637 | 30.00th=[ 735], 40.00th=[ 743], 50.00th=[ 751], 60.00th=[ 760], 00:21:03.637 | 70.00th=[ 760], 80.00th=[ 768], 90.00th=[ 785], 95.00th=[ 919], 00:21:03.637 | 99.00th=[ 1133], 99.50th=[ 1167], 99.90th=[ 1183], 99.95th=[ 1183], 00:21:03.637 | 99.99th=[ 1183] 00:21:03.637 bw ( KiB/s): min= 2560, max=10496, per=3.12%, avg=9444.30, stdev=2435.21, samples=10 00:21:03.637 iops : min= 20, max= 82, avg=73.70, stdev=19.00, samples=10 00:21:03.637 lat (msec) : 20=0.24%, 50=19.45%, 100=22.33%, 250=5.64%, 500=3.00% 00:21:03.637 lat (msec) : 750=23.29%, 1000=24.37%, 2000=1.68% 00:21:03.637 cpu : usr=0.09%, sys=0.41%, ctx=574, majf=0, minf=1 00:21:03.637 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.8%, >=64=92.4% 00:21:03.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.637 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:03.637 issued rwts: total=403,430,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.637 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:03.637 job28: (groupid=0, jobs=1): err= 0: pid=91884: Tue Jul 23 05:12:03 2024 00:21:03.637 read: IOPS=89, BW=11.2MiB/s (11.7MB/s)(60.6MiB/5432msec) 00:21:03.637 slat (usec): min=9, max=315, avg=27.47, stdev=23.61 00:21:03.637 clat (msec): min=10, max=448, avg=64.45, stdev=52.94 00:21:03.637 lat (msec): min=10, max=448, avg=64.48, stdev=52.94 00:21:03.637 clat percentiles (msec): 00:21:03.637 | 1.00th=[ 18], 5.00th=[ 38], 10.00th=[ 46], 20.00th=[ 48], 00:21:03.637 | 30.00th=[ 50], 40.00th=[ 50], 50.00th=[ 51], 60.00th=[ 51], 00:21:03.637 | 70.00th=[ 52], 80.00th=[ 54], 90.00th=[ 92], 95.00th=[ 146], 
00:21:03.637 | 99.00th=[ 284], 99.50th=[ 435], 99.90th=[ 447], 99.95th=[ 447], 00:21:03.637 | 99.99th=[ 447] 00:21:03.637 bw ( KiB/s): min= 9216, max=17699, per=4.07%, avg=12317.10, stdev=2939.52, samples=10 00:21:03.637 iops : min= 72, max= 138, avg=96.20, stdev=22.91, samples=10 00:21:03.637 write: IOPS=78, BW=9.85MiB/s (10.3MB/s)(53.5MiB/5432msec); 0 zone resets 00:21:03.637 slat (usec): min=10, max=242, avg=33.43, stdev=24.87 00:21:03.638 clat (msec): min=212, max=1174, avg=737.99, stdev=108.36 00:21:03.638 lat (msec): min=213, max=1174, avg=738.03, stdev=108.37 00:21:03.638 clat percentiles (msec): 00:21:03.638 | 1.00th=[ 388], 5.00th=[ 535], 10.00th=[ 684], 20.00th=[ 709], 00:21:03.638 | 30.00th=[ 726], 40.00th=[ 735], 50.00th=[ 743], 60.00th=[ 751], 00:21:03.638 | 70.00th=[ 760], 80.00th=[ 768], 90.00th=[ 785], 95.00th=[ 877], 00:21:03.638 | 99.00th=[ 1116], 99.50th=[ 1133], 99.90th=[ 1183], 99.95th=[ 1183], 00:21:03.638 | 99.99th=[ 1183] 00:21:03.638 bw ( KiB/s): min= 2565, max=10496, per=3.11%, avg=9421.30, stdev=2426.46, samples=10 00:21:03.638 iops : min= 20, max= 82, avg=73.60, stdev=18.97, samples=10 00:21:03.638 lat (msec) : 20=0.55%, 50=23.44%, 100=24.21%, 250=3.72%, 500=2.96% 00:21:03.638 lat (msec) : 750=26.07%, 1000=17.20%, 2000=1.86% 00:21:03.638 cpu : usr=0.13%, sys=0.41%, ctx=771, majf=0, minf=1 00:21:03.638 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.5%, >=64=93.1% 00:21:03.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.638 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:03.638 issued rwts: total=485,428,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.638 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:03.638 job29: (groupid=0, jobs=1): err= 0: pid=91885: Tue Jul 23 05:12:03 2024 00:21:03.638 read: IOPS=86, BW=10.8MiB/s (11.3MB/s)(58.4MiB/5412msec) 00:21:03.638 slat (usec): min=7, max=194, avg=27.89, stdev=17.33 00:21:03.638 clat (msec): min=34, 
max=445, avg=71.52, stdev=57.22 00:21:03.638 lat (msec): min=34, max=445, avg=71.55, stdev=57.22 00:21:03.638 clat percentiles (msec): 00:21:03.638 | 1.00th=[ 37], 5.00th=[ 45], 10.00th=[ 47], 20.00th=[ 49], 00:21:03.638 | 30.00th=[ 50], 40.00th=[ 51], 50.00th=[ 51], 60.00th=[ 52], 00:21:03.638 | 70.00th=[ 52], 80.00th=[ 59], 90.00th=[ 150], 95.00th=[ 205], 00:21:03.638 | 99.00th=[ 271], 99.50th=[ 422], 99.90th=[ 447], 99.95th=[ 447], 00:21:03.638 | 99.99th=[ 447] 00:21:03.638 bw ( KiB/s): min= 7680, max=20480, per=3.92%, avg=11850.20, stdev=3542.68, samples=10 00:21:03.638 iops : min= 60, max= 160, avg=92.50, stdev=27.66, samples=10 00:21:03.638 write: IOPS=79, BW=9.91MiB/s (10.4MB/s)(53.6MiB/5412msec); 0 zone resets 00:21:03.638 slat (usec): min=8, max=3773, avg=40.69, stdev=181.44 00:21:03.638 clat (msec): min=247, max=1149, avg=728.31, stdev=104.53 00:21:03.638 lat (msec): min=247, max=1149, avg=728.35, stdev=104.53 00:21:03.638 clat percentiles (msec): 00:21:03.638 | 1.00th=[ 388], 5.00th=[ 542], 10.00th=[ 617], 20.00th=[ 693], 00:21:03.638 | 30.00th=[ 726], 40.00th=[ 735], 50.00th=[ 743], 60.00th=[ 751], 00:21:03.638 | 70.00th=[ 760], 80.00th=[ 768], 90.00th=[ 785], 95.00th=[ 894], 00:21:03.638 | 99.00th=[ 1062], 99.50th=[ 1099], 99.90th=[ 1150], 99.95th=[ 1150], 00:21:03.638 | 99.99th=[ 1150] 00:21:03.638 bw ( KiB/s): min= 2560, max=10496, per=3.13%, avg=9469.90, stdev=2439.87, samples=10 00:21:03.638 iops : min= 20, max= 82, avg=73.90, stdev=19.03, samples=10 00:21:03.638 lat (msec) : 50=21.99%, 100=22.88%, 250=6.47%, 500=2.46%, 750=28.24% 00:21:03.638 lat (msec) : 1000=16.85%, 2000=1.12% 00:21:03.638 cpu : usr=0.30%, sys=0.39%, ctx=543, majf=0, minf=1 00:21:03.638 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=93.0% 00:21:03.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.638 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:03.638 issued rwts: total=467,429,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:21:03.638 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:03.638 00:21:03.638 Run status group 0 (all jobs): 00:21:03.638 READ: bw=295MiB/s (310MB/s), 8872KiB/s-11.2MiB/s (9085kB/s-11.8MB/s), io=1610MiB (1688MB), run=5402-5450msec 00:21:03.638 WRITE: bw=296MiB/s (310MB/s), 9.84MiB/s-10.0MiB/s (10.3MB/s-10.5MB/s), io=1612MiB (1690MB), run=5402-5450msec 00:21:03.638 00:21:03.638 Disk stats (read/write): 00:21:03.638 sda: ios=455/424, merge=0/0, ticks=25542/304780, in_queue=330323, util=90.70% 00:21:03.638 sdb: ios=423/424, merge=0/0, ticks=24936/304999, in_queue=329935, util=91.37% 00:21:03.638 sdc: ios=494/423, merge=0/0, ticks=27855/303358, in_queue=331213, util=92.29% 00:21:03.638 sdd: ios=464/424, merge=0/0, ticks=26665/303752, in_queue=330417, util=91.97% 00:21:03.638 sde: ios=451/424, merge=0/0, ticks=27160/304204, in_queue=331364, util=92.37% 00:21:03.638 sdf: ios=465/424, merge=0/0, ticks=27138/304265, in_queue=331403, util=91.86% 00:21:03.638 sdg: ios=459/424, merge=0/0, ticks=26351/304295, in_queue=330647, util=92.31% 00:21:03.638 sdh: ios=412/424, merge=0/0, ticks=25377/305737, in_queue=331115, util=92.61% 00:21:03.638 sdi: ios=383/426, merge=0/0, ticks=23033/309385, in_queue=332419, util=92.82% 00:21:03.638 sdj: ios=469/433, merge=0/0, ticks=29409/303385, in_queue=332794, util=93.43% 00:21:03.638 sdk: ios=455/424, merge=0/0, ticks=29156/301059, in_queue=330215, util=92.65% 00:21:03.638 sdl: ios=414/424, merge=0/0, ticks=27954/302847, in_queue=330801, util=93.63% 00:21:03.638 sdm: ios=473/424, merge=0/0, ticks=28853/303274, in_queue=332127, util=94.10% 00:21:03.638 sdn: ios=440/424, merge=0/0, ticks=26805/304736, in_queue=331541, util=94.19% 00:21:03.638 sdo: ios=410/423, merge=0/0, ticks=25841/304250, in_queue=330091, util=93.77% 00:21:03.638 sdp: ios=405/426, merge=0/0, ticks=27947/304071, in_queue=332018, util=94.83% 00:21:03.638 sdq: ios=456/423, merge=0/0, ticks=29998/300107, in_queue=330106, 
util=94.36% 00:21:03.638 sdr: ios=401/424, merge=0/0, ticks=26714/303177, in_queue=329891, util=94.95% 00:21:03.638 sds: ios=430/424, merge=0/0, ticks=26660/304071, in_queue=330731, util=95.24% 00:21:03.638 sdt: ios=479/426, merge=0/0, ticks=29071/303582, in_queue=332653, util=95.88% 00:21:03.638 sdu: ios=409/423, merge=0/0, ticks=25682/304198, in_queue=329880, util=95.54% 00:21:03.638 sdv: ios=392/424, merge=0/0, ticks=24905/305539, in_queue=330445, util=95.85% 00:21:03.638 sdw: ios=486/424, merge=0/0, ticks=28843/302647, in_queue=331490, util=96.15% 00:21:03.638 sdx: ios=409/424, merge=0/0, ticks=27139/303582, in_queue=330722, util=95.89% 00:21:03.638 sdy: ios=456/424, merge=0/0, ticks=26757/305103, in_queue=331860, util=96.69% 00:21:03.638 sdz: ios=437/423, merge=0/0, ticks=27493/303333, in_queue=330826, util=96.65% 00:21:03.638 sdaa: ios=392/423, merge=0/0, ticks=26101/304082, in_queue=330184, util=96.68% 00:21:03.638 sdab: ios=403/423, merge=0/0, ticks=26461/304716, in_queue=331177, util=96.66% 00:21:03.638 sdac: ios=485/424, merge=0/0, ticks=29983/301957, in_queue=331940, util=97.34% 00:21:03.638 sdad: ios=467/423, merge=0/0, ticks=31790/298484, in_queue=330274, util=97.12% 00:21:03.638 [2024-07-23 05:12:03.142731] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.638 [2024-07-23 05:12:03.144747] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.638 [2024-07-23 05:12:03.146940] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.638 [2024-07-23 05:12:03.148946] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.638 [2024-07-23 05:12:03.151374] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.638 [2024-07-23 05:12:03.153777] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.638 [2024-07-23 05:12:03.156234] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.638 [2024-07-23 05:12:03.158434] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.638 05:12:03 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 262144 -d 16 -t randwrite -r 10 00:21:03.638 [global] 00:21:03.638 thread=1 00:21:03.638 invalidate=1 00:21:03.638 rw=randwrite 00:21:03.638 time_based=1 00:21:03.638 runtime=10 00:21:03.638 ioengine=libaio 00:21:03.638 direct=1 00:21:03.638 bs=262144 00:21:03.638 iodepth=16 00:21:03.638 norandommap=1 00:21:03.638 numjobs=1 00:21:03.638 00:21:03.638 [job0] 00:21:03.638 filename=/dev/sda 00:21:03.638 [job1] 00:21:03.638 filename=/dev/sdb 00:21:03.638 [job2] 00:21:03.638 filename=/dev/sdc 00:21:03.638 [job3] 00:21:03.638 filename=/dev/sdd 00:21:03.638 [job4] 00:21:03.638 filename=/dev/sde 00:21:03.638 [job5] 00:21:03.638 filename=/dev/sdf 00:21:03.638 [job6] 00:21:03.638 filename=/dev/sdg 00:21:03.638 [job7] 00:21:03.638 filename=/dev/sdh 00:21:03.638 [job8] 00:21:03.638 filename=/dev/sdi 00:21:03.638 [job9] 00:21:03.638 filename=/dev/sdj 00:21:03.638 [job10] 00:21:03.638 filename=/dev/sdk 00:21:03.638 [job11] 00:21:03.638 filename=/dev/sdl 00:21:03.638 [job12] 00:21:03.638 filename=/dev/sdm 00:21:03.638 [job13] 00:21:03.638 filename=/dev/sdn 00:21:03.638 [job14] 00:21:03.638 filename=/dev/sdo 00:21:03.638 [job15] 00:21:03.638 filename=/dev/sdp 00:21:03.638 [job16] 00:21:03.638 filename=/dev/sdq 00:21:03.638 [job17] 00:21:03.638 filename=/dev/sdr 00:21:03.638 [job18] 00:21:03.638 filename=/dev/sds 00:21:03.638 [job19] 00:21:03.638 filename=/dev/sdt 00:21:03.638 [job20] 00:21:03.638 filename=/dev/sdu 00:21:03.638 [job21] 00:21:03.638 filename=/dev/sdv 00:21:03.638 [job22] 00:21:03.638 filename=/dev/sdw 00:21:03.638 [job23] 00:21:03.638 filename=/dev/sdx 00:21:03.638 [job24] 00:21:03.638 filename=/dev/sdy 00:21:03.638 
[job25] 00:21:03.638 filename=/dev/sdz 00:21:03.638 [job26] 00:21:03.638 filename=/dev/sdaa 00:21:03.638 [job27] 00:21:03.638 filename=/dev/sdab 00:21:03.638 [job28] 00:21:03.638 filename=/dev/sdac 00:21:03.638 [job29] 00:21:03.638 filename=/dev/sdad 00:21:03.638 queue_depth set to 113 (sda) 00:21:03.638 queue_depth set to 113 (sdb) 00:21:03.638 queue_depth set to 113 (sdc) 00:21:03.638 queue_depth set to 113 (sdd) 00:21:03.638 queue_depth set to 113 (sde) 00:21:03.638 queue_depth set to 113 (sdf) 00:21:03.638 queue_depth set to 113 (sdg) 00:21:03.638 queue_depth set to 113 (sdh) 00:21:03.638 queue_depth set to 113 (sdi) 00:21:03.639 queue_depth set to 113 (sdj) 00:21:03.639 queue_depth set to 113 (sdk) 00:21:03.639 queue_depth set to 113 (sdl) 00:21:03.639 queue_depth set to 113 (sdm) 00:21:03.639 queue_depth set to 113 (sdn) 00:21:03.639 queue_depth set to 113 (sdo) 00:21:03.639 queue_depth set to 113 (sdp) 00:21:03.639 queue_depth set to 113 (sdq) 00:21:03.639 queue_depth set to 113 (sdr) 00:21:03.639 queue_depth set to 113 (sds) 00:21:03.639 queue_depth set to 113 (sdt) 00:21:03.639 queue_depth set to 113 (sdu) 00:21:03.639 queue_depth set to 113 (sdv) 00:21:03.639 queue_depth set to 113 (sdw) 00:21:03.639 queue_depth set to 113 (sdx) 00:21:03.639 queue_depth set to 113 (sdy) 00:21:03.639 queue_depth set to 113 (sdz) 00:21:03.639 queue_depth set to 113 (sdaa) 00:21:03.639 queue_depth set to 113 (sdab) 00:21:03.639 queue_depth set to 113 (sdac) 00:21:03.639 queue_depth set to 113 (sdad) 00:21:03.897 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:03.897 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:03.897 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:03.897 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, 
ioengine=libaio, iodepth=16 00:21:03.897 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:03.897 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:03.897 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:03.897 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:03.897 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:03.897 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:03.897 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:03.897 job11: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:03.897 job12: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:03.897 job13: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:03.897 job14: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:03.897 job15: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:03.897 job16: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:03.897 job17: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:03.897 job18: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:03.897 job19: (g=0): rw=randwrite, 
bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:03.897 job20: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:03.897 job21: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:03.897 job22: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:03.897 job23: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:03.897 job24: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:03.898 job25: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:03.898 job26: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:03.898 job27: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:03.898 job28: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:03.898 job29: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:03.898 fio-3.35 00:21:03.898 Starting 30 threads 00:21:03.898 [2024-07-23 05:12:03.958384] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.898 [2024-07-23 05:12:03.961948] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.898 [2024-07-23 05:12:03.965917] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.898 [2024-07-23 05:12:03.969676] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.898 [2024-07-23 05:12:03.971969] scsi_bdev.c: 616:bdev_scsi_inquiry: 
*NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.898 [2024-07-23 05:12:03.974516] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.898 [2024-07-23 05:12:03.976736] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.898 [2024-07-23 05:12:03.979103] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.898 [2024-07-23 05:12:03.981546] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.898 [2024-07-23 05:12:03.983986] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.898 [2024-07-23 05:12:03.986473] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.898 [2024-07-23 05:12:03.989043] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.898 [2024-07-23 05:12:03.991410] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.898 [2024-07-23 05:12:03.993844] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.898 [2024-07-23 05:12:03.996239] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.898 [2024-07-23 05:12:03.998841] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.898 [2024-07-23 05:12:04.001161] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.898 [2024-07-23 05:12:04.003512] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.898 [2024-07-23 05:12:04.006180] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.898 [2024-07-23 05:12:04.008815] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.898 [2024-07-23 05:12:04.010949] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 
00:21:03.898 [2024-07-23 05:12:04.013668] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.898 [2024-07-23 05:12:04.016115] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.898 [2024-07-23 05:12:04.018564] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.898 [2024-07-23 05:12:04.020632] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.898 [2024-07-23 05:12:04.023208] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.898 [2024-07-23 05:12:04.025845] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.898 [2024-07-23 05:12:04.030381] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.898 [2024-07-23 05:12:04.032371] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:03.898 [2024-07-23 05:12:04.034843] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:16.118 [2024-07-23 05:12:14.807797] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:16.118 [2024-07-23 05:12:14.818050] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:16.118 [2024-07-23 05:12:14.820795] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:16.118 [2024-07-23 05:12:14.823329] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:16.118 [2024-07-23 05:12:14.825826] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:16.118 [2024-07-23 05:12:14.828368] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:16.118 [2024-07-23 05:12:14.830901] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:16.118 [2024-07-23 05:12:14.833539] 
scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:16.119 [2024-07-23 05:12:14.835720] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:16.119 [2024-07-23 05:12:14.838071] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:16.119 [2024-07-23 05:12:14.840235] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:16.119 [2024-07-23 05:12:14.842374] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:16.119 [2024-07-23 05:12:14.844573] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:16.119 [2024-07-23 05:12:14.846809] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:16.119 [2024-07-23 05:12:14.848921] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:16.119 [2024-07-23 05:12:14.851141] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:16.119 [2024-07-23 05:12:14.853362] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:16.119 [2024-07-23 05:12:14.855570] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:16.119 [2024-07-23 05:12:14.860599] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:16.119 00:21:16.119 job0: (groupid=0, jobs=1): err= 0: pid=92392: Tue Jul 23 05:12:14 2024 00:21:16.119 write: IOPS=71, BW=17.9MiB/s (18.8MB/s)(183MiB/10211msec); 0 zone resets 00:21:16.119 slat (usec): min=22, max=1411, avg=61.26, stdev=53.49 00:21:16.119 clat (msec): min=6, max=436, avg=222.49, stdev=28.31 00:21:16.119 lat (msec): min=7, max=436, avg=222.55, stdev=28.30 00:21:16.119 clat percentiles (msec): 00:21:16.119 | 1.00th=[ 93], 5.00th=[ 218], 10.00th=[ 218], 20.00th=[ 218], 00:21:16.119 | 30.00th=[ 220], 40.00th=[ 220], 50.00th=[ 220], 60.00th=[ 222], 
00:21:16.119 | 70.00th=[ 222], 80.00th=[ 226], 90.00th=[ 243], 95.00th=[ 247], 00:21:16.119 | 99.00th=[ 330], 99.50th=[ 393], 99.90th=[ 439], 99.95th=[ 439], 00:21:16.119 | 99.99th=[ 439] 00:21:16.119 bw ( KiB/s): min=16384, max=18944, per=3.34%, avg=18380.80, stdev=702.80, samples=20 00:21:16.119 iops : min= 64, max= 74, avg=71.80, stdev= 2.75, samples=20 00:21:16.119 lat (msec) : 10=0.14%, 20=0.14%, 50=0.27%, 100=0.55%, 250=97.14% 00:21:16.119 lat (msec) : 500=1.77% 00:21:16.119 cpu : usr=0.26%, sys=0.28%, ctx=744, majf=0, minf=1 00:21:16.119 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:21:16.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.119 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.119 issued rwts: total=0,733,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.119 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.119 job1: (groupid=0, jobs=1): err= 0: pid=92393: Tue Jul 23 05:12:14 2024 00:21:16.119 write: IOPS=71, BW=17.9MiB/s (18.8MB/s)(183MiB/10191msec); 0 zone resets 00:21:16.119 slat (usec): min=16, max=400, avg=60.52, stdev=20.89 00:21:16.119 clat (msec): min=23, max=415, avg=222.72, stdev=24.62 00:21:16.119 lat (msec): min=23, max=415, avg=222.78, stdev=24.62 00:21:16.119 clat percentiles (msec): 00:21:16.119 | 1.00th=[ 118], 5.00th=[ 218], 10.00th=[ 218], 20.00th=[ 218], 00:21:16.119 | 30.00th=[ 220], 40.00th=[ 220], 50.00th=[ 220], 60.00th=[ 222], 00:21:16.119 | 70.00th=[ 222], 80.00th=[ 226], 90.00th=[ 243], 95.00th=[ 247], 00:21:16.119 | 99.00th=[ 309], 99.50th=[ 372], 99.90th=[ 418], 99.95th=[ 418], 00:21:16.119 | 99.99th=[ 418] 00:21:16.119 bw ( KiB/s): min=16384, max=18944, per=3.33%, avg=18327.75, stdev=676.56, samples=20 00:21:16.119 iops : min= 64, max= 74, avg=71.55, stdev= 2.65, samples=20 00:21:16.119 lat (msec) : 50=0.27%, 100=0.55%, 250=97.67%, 500=1.50% 00:21:16.119 cpu : usr=0.20%, sys=0.35%, ctx=737, majf=0, minf=1 
00:21:16.119 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=97.9%, 32=0.0%, >=64=0.0% 00:21:16.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.119 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.119 issued rwts: total=0,731,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.119 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.119 job2: (groupid=0, jobs=1): err= 0: pid=92394: Tue Jul 23 05:12:14 2024 00:21:16.119 write: IOPS=71, BW=17.9MiB/s (18.8MB/s)(183MiB/10210msec); 0 zone resets 00:21:16.119 slat (usec): min=25, max=496, avg=75.27, stdev=44.99 00:21:16.119 clat (msec): min=9, max=437, avg=222.47, stdev=28.42 00:21:16.119 lat (msec): min=9, max=438, avg=222.55, stdev=28.43 00:21:16.119 clat percentiles (msec): 00:21:16.119 | 1.00th=[ 92], 5.00th=[ 218], 10.00th=[ 218], 20.00th=[ 218], 00:21:16.119 | 30.00th=[ 220], 40.00th=[ 220], 50.00th=[ 220], 60.00th=[ 222], 00:21:16.119 | 70.00th=[ 222], 80.00th=[ 226], 90.00th=[ 243], 95.00th=[ 247], 00:21:16.119 | 99.00th=[ 330], 99.50th=[ 393], 99.90th=[ 439], 99.95th=[ 439], 00:21:16.119 | 99.99th=[ 439] 00:21:16.119 bw ( KiB/s): min=16384, max=19456, per=3.34%, avg=18380.80, stdev=722.17, samples=20 00:21:16.119 iops : min= 64, max= 76, avg=71.80, stdev= 2.82, samples=20 00:21:16.119 lat (msec) : 10=0.14%, 20=0.14%, 50=0.27%, 100=0.55%, 250=97.14% 00:21:16.119 lat (msec) : 500=1.77% 00:21:16.119 cpu : usr=0.19%, sys=0.38%, ctx=794, majf=0, minf=1 00:21:16.119 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:21:16.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.119 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.119 issued rwts: total=0,733,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.119 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.119 job3: (groupid=0, jobs=1): err= 0: pid=92395: Tue Jul 23 05:12:14 2024 00:21:16.119 
write: IOPS=71, BW=17.9MiB/s (18.8MB/s)(183MiB/10198msec); 0 zone resets 00:21:16.119 slat (usec): min=23, max=388, avg=71.88, stdev=41.83 00:21:16.119 clat (msec): min=23, max=424, avg=222.85, stdev=25.31 00:21:16.119 lat (msec): min=23, max=424, avg=222.92, stdev=25.31 00:21:16.119 clat percentiles (msec): 00:21:16.119 | 1.00th=[ 117], 5.00th=[ 218], 10.00th=[ 218], 20.00th=[ 218], 00:21:16.119 | 30.00th=[ 220], 40.00th=[ 220], 50.00th=[ 220], 60.00th=[ 222], 00:21:16.119 | 70.00th=[ 222], 80.00th=[ 226], 90.00th=[ 243], 95.00th=[ 247], 00:21:16.119 | 99.00th=[ 317], 99.50th=[ 380], 99.90th=[ 426], 99.95th=[ 426], 00:21:16.119 | 99.99th=[ 426] 00:21:16.119 bw ( KiB/s): min=16384, max=18944, per=3.33%, avg=18329.60, stdev=676.80, samples=20 00:21:16.119 iops : min= 64, max= 74, avg=71.60, stdev= 2.64, samples=20 00:21:16.119 lat (msec) : 50=0.27%, 100=0.55%, 250=97.54%, 500=1.64% 00:21:16.119 cpu : usr=0.19%, sys=0.40%, ctx=787, majf=0, minf=1 00:21:16.119 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=97.9%, 32=0.0%, >=64=0.0% 00:21:16.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.119 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.119 issued rwts: total=0,731,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.119 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.119 job4: (groupid=0, jobs=1): err= 0: pid=92426: Tue Jul 23 05:12:14 2024 00:21:16.119 write: IOPS=71, BW=17.9MiB/s (18.8MB/s)(183MiB/10200msec); 0 zone resets 00:21:16.119 slat (usec): min=16, max=256, avg=58.10, stdev=17.71 00:21:16.119 clat (msec): min=22, max=428, avg=222.91, stdev=25.62 00:21:16.119 lat (msec): min=22, max=428, avg=222.97, stdev=25.62 00:21:16.119 clat percentiles (msec): 00:21:16.119 | 1.00th=[ 117], 5.00th=[ 218], 10.00th=[ 218], 20.00th=[ 218], 00:21:16.119 | 30.00th=[ 220], 40.00th=[ 220], 50.00th=[ 220], 60.00th=[ 222], 00:21:16.119 | 70.00th=[ 222], 80.00th=[ 226], 90.00th=[ 243], 95.00th=[ 
247], 00:21:16.119 | 99.00th=[ 321], 99.50th=[ 384], 99.90th=[ 430], 99.95th=[ 430], 00:21:16.119 | 99.99th=[ 430] 00:21:16.119 bw ( KiB/s): min=16384, max=18944, per=3.33%, avg=18329.60, stdev=676.80, samples=20 00:21:16.119 iops : min= 64, max= 74, avg=71.60, stdev= 2.64, samples=20 00:21:16.119 lat (msec) : 50=0.41%, 100=0.41%, 250=97.54%, 500=1.64% 00:21:16.119 cpu : usr=0.24%, sys=0.25%, ctx=736, majf=0, minf=1 00:21:16.119 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=97.9%, 32=0.0%, >=64=0.0% 00:21:16.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.119 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.119 issued rwts: total=0,731,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.119 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.119 job5: (groupid=0, jobs=1): err= 0: pid=92427: Tue Jul 23 05:12:14 2024 00:21:16.119 write: IOPS=72, BW=18.1MiB/s (18.9MB/s)(185MiB/10214msec); 0 zone resets 00:21:16.119 slat (usec): min=23, max=213, avg=59.92, stdev=17.45 00:21:16.119 clat (usec): min=1955, max=440896, avg=221073.99, stdev=33408.97 00:21:16.119 lat (msec): min=2, max=440, avg=221.13, stdev=33.41 00:21:16.119 clat percentiles (msec): 00:21:16.119 | 1.00th=[ 30], 5.00th=[ 218], 10.00th=[ 218], 20.00th=[ 218], 00:21:16.119 | 30.00th=[ 220], 40.00th=[ 220], 50.00th=[ 220], 60.00th=[ 222], 00:21:16.119 | 70.00th=[ 222], 80.00th=[ 226], 90.00th=[ 243], 95.00th=[ 247], 00:21:16.119 | 99.00th=[ 334], 99.50th=[ 397], 99.90th=[ 443], 99.95th=[ 443], 00:21:16.119 | 99.99th=[ 443] 00:21:16.119 bw ( KiB/s): min=16384, max=21547, per=3.36%, avg=18509.05, stdev=978.66, samples=20 00:21:16.119 iops : min= 64, max= 84, avg=72.25, stdev= 3.78, samples=20 00:21:16.119 lat (msec) : 2=0.14%, 4=0.14%, 10=0.27%, 20=0.27%, 50=0.41% 00:21:16.119 lat (msec) : 100=0.54%, 250=96.48%, 500=1.76% 00:21:16.119 cpu : usr=0.19%, sys=0.42%, ctx=745, majf=0, minf=1 00:21:16.119 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 
8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:21:16.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.119 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.119 issued rwts: total=0,738,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.119 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.119 job6: (groupid=0, jobs=1): err= 0: pid=92428: Tue Jul 23 05:12:14 2024 00:21:16.119 write: IOPS=72, BW=18.0MiB/s (18.9MB/s)(184MiB/10217msec); 0 zone resets 00:21:16.119 slat (usec): min=28, max=125, avg=56.00, stdev=14.05 00:21:16.119 clat (msec): min=6, max=440, avg=221.73, stdev=31.20 00:21:16.119 lat (msec): min=6, max=440, avg=221.79, stdev=31.20 00:21:16.119 clat percentiles (msec): 00:21:16.119 | 1.00th=[ 57], 5.00th=[ 218], 10.00th=[ 218], 20.00th=[ 218], 00:21:16.119 | 30.00th=[ 220], 40.00th=[ 220], 50.00th=[ 220], 60.00th=[ 222], 00:21:16.119 | 70.00th=[ 222], 80.00th=[ 226], 90.00th=[ 243], 95.00th=[ 247], 00:21:16.119 | 99.00th=[ 334], 99.50th=[ 393], 99.90th=[ 439], 99.95th=[ 439], 00:21:16.119 | 99.99th=[ 439] 00:21:16.119 bw ( KiB/s): min=16384, max=20480, per=3.35%, avg=18455.70, stdev=837.31, samples=20 00:21:16.120 iops : min= 64, max= 80, avg=72.05, stdev= 3.25, samples=20 00:21:16.120 lat (msec) : 10=0.27%, 20=0.27%, 50=0.41%, 100=0.54%, 250=96.74% 00:21:16.120 lat (msec) : 500=1.77% 00:21:16.120 cpu : usr=0.20%, sys=0.38%, ctx=738, majf=0, minf=1 00:21:16.120 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:21:16.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.120 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.120 issued rwts: total=0,736,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.120 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.120 job7: (groupid=0, jobs=1): err= 0: pid=92431: Tue Jul 23 05:12:14 2024 00:21:16.120 write: IOPS=71, BW=17.9MiB/s 
(18.8MB/s)(183MiB/10207msec); 0 zone resets 00:21:16.120 slat (usec): min=19, max=484, avg=82.31, stdev=54.66 00:21:16.120 clat (msec): min=16, max=426, avg=222.71, stdev=25.98 00:21:16.120 lat (msec): min=17, max=426, avg=222.79, stdev=25.98 00:21:16.120 clat percentiles (msec): 00:21:16.120 | 1.00th=[ 111], 5.00th=[ 218], 10.00th=[ 218], 20.00th=[ 218], 00:21:16.120 | 30.00th=[ 220], 40.00th=[ 220], 50.00th=[ 220], 60.00th=[ 222], 00:21:16.120 | 70.00th=[ 222], 80.00th=[ 226], 90.00th=[ 243], 95.00th=[ 247], 00:21:16.120 | 99.00th=[ 317], 99.50th=[ 380], 99.90th=[ 426], 99.95th=[ 426], 00:21:16.120 | 99.99th=[ 426] 00:21:16.120 bw ( KiB/s): min=16384, max=18944, per=3.33%, avg=18355.20, stdev=670.14, samples=20 00:21:16.120 iops : min= 64, max= 74, avg=71.70, stdev= 2.62, samples=20 00:21:16.120 lat (msec) : 20=0.14%, 50=0.27%, 100=0.55%, 250=97.40%, 500=1.64% 00:21:16.120 cpu : usr=0.19%, sys=0.38%, ctx=821, majf=0, minf=1 00:21:16.120 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:21:16.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.120 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.120 issued rwts: total=0,732,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.120 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.120 job8: (groupid=0, jobs=1): err= 0: pid=92439: Tue Jul 23 05:12:14 2024 00:21:16.120 write: IOPS=71, BW=17.9MiB/s (18.8MB/s)(183MiB/10207msec); 0 zone resets 00:21:16.120 slat (usec): min=19, max=503, avg=57.92, stdev=21.62 00:21:16.120 clat (msec): min=16, max=426, avg=222.74, stdev=25.98 00:21:16.120 lat (msec): min=17, max=426, avg=222.80, stdev=25.98 00:21:16.120 clat percentiles (msec): 00:21:16.120 | 1.00th=[ 111], 5.00th=[ 218], 10.00th=[ 218], 20.00th=[ 218], 00:21:16.120 | 30.00th=[ 220], 40.00th=[ 220], 50.00th=[ 220], 60.00th=[ 222], 00:21:16.120 | 70.00th=[ 222], 80.00th=[ 226], 90.00th=[ 243], 95.00th=[ 247], 00:21:16.120 | 
99.00th=[ 317], 99.50th=[ 380], 99.90th=[ 426], 99.95th=[ 426], 00:21:16.120 | 99.99th=[ 426] 00:21:16.120 bw ( KiB/s): min=16384, max=18944, per=3.33%, avg=18355.20, stdev=670.14, samples=20 00:21:16.120 iops : min= 64, max= 74, avg=71.70, stdev= 2.62, samples=20 00:21:16.120 lat (msec) : 20=0.14%, 50=0.27%, 100=0.55%, 250=97.40%, 500=1.64% 00:21:16.120 cpu : usr=0.26%, sys=0.33%, ctx=733, majf=0, minf=1 00:21:16.120 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:21:16.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.120 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.120 issued rwts: total=0,732,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.120 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.120 job9: (groupid=0, jobs=1): err= 0: pid=92472: Tue Jul 23 05:12:14 2024 00:21:16.120 write: IOPS=71, BW=17.9MiB/s (18.8MB/s)(183MiB/10207msec); 0 zone resets 00:21:16.120 slat (usec): min=16, max=713, avg=56.57, stdev=36.68 00:21:16.120 clat (msec): min=12, max=434, avg=222.74, stdev=27.12 00:21:16.120 lat (msec): min=12, max=434, avg=222.80, stdev=27.12 00:21:16.120 clat percentiles (msec): 00:21:16.120 | 1.00th=[ 105], 5.00th=[ 218], 10.00th=[ 218], 20.00th=[ 218], 00:21:16.120 | 30.00th=[ 220], 40.00th=[ 220], 50.00th=[ 220], 60.00th=[ 222], 00:21:16.120 | 70.00th=[ 222], 80.00th=[ 226], 90.00th=[ 243], 95.00th=[ 247], 00:21:16.120 | 99.00th=[ 326], 99.50th=[ 388], 99.90th=[ 435], 99.95th=[ 435], 00:21:16.120 | 99.99th=[ 435] 00:21:16.120 bw ( KiB/s): min=16384, max=18944, per=3.33%, avg=18355.20, stdev=690.43, samples=20 00:21:16.120 iops : min= 64, max= 74, avg=71.70, stdev= 2.70, samples=20 00:21:16.120 lat (msec) : 20=0.14%, 50=0.27%, 100=0.55%, 250=97.40%, 500=1.64% 00:21:16.120 cpu : usr=0.20%, sys=0.27%, ctx=733, majf=0, minf=1 00:21:16.120 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:21:16.120 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.120 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.120 issued rwts: total=0,732,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.120 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.120 job10: (groupid=0, jobs=1): err= 0: pid=92502: Tue Jul 23 05:12:14 2024 00:21:16.120 write: IOPS=71, BW=17.9MiB/s (18.8MB/s)(183MiB/10189msec); 0 zone resets 00:21:16.120 slat (usec): min=27, max=170, avg=54.88, stdev=14.69 00:21:16.120 clat (msec): min=23, max=413, avg=222.69, stdev=24.45 00:21:16.120 lat (msec): min=23, max=413, avg=222.74, stdev=24.45 00:21:16.120 clat percentiles (msec): 00:21:16.120 | 1.00th=[ 118], 5.00th=[ 218], 10.00th=[ 218], 20.00th=[ 218], 00:21:16.120 | 30.00th=[ 220], 40.00th=[ 220], 50.00th=[ 220], 60.00th=[ 222], 00:21:16.120 | 70.00th=[ 222], 80.00th=[ 226], 90.00th=[ 243], 95.00th=[ 247], 00:21:16.120 | 99.00th=[ 305], 99.50th=[ 368], 99.90th=[ 414], 99.95th=[ 414], 00:21:16.120 | 99.99th=[ 414] 00:21:16.120 bw ( KiB/s): min=16384, max=18944, per=3.33%, avg=18327.75, stdev=676.56, samples=20 00:21:16.120 iops : min= 64, max= 74, avg=71.55, stdev= 2.65, samples=20 00:21:16.120 lat (msec) : 50=0.27%, 100=0.55%, 250=97.67%, 500=1.50% 00:21:16.120 cpu : usr=0.23%, sys=0.34%, ctx=736, majf=0, minf=1 00:21:16.120 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=97.9%, 32=0.0%, >=64=0.0% 00:21:16.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.120 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.120 issued rwts: total=0,731,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.120 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.120 job11: (groupid=0, jobs=1): err= 0: pid=92580: Tue Jul 23 05:12:14 2024 00:21:16.120 write: IOPS=71, BW=17.9MiB/s (18.8MB/s)(183MiB/10193msec); 0 zone resets 00:21:16.120 slat (usec): min=24, max=315, avg=62.15, stdev=28.51 00:21:16.120 clat 
(msec): min=24, max=417, avg=222.77, stdev=24.67 00:21:16.120 lat (msec): min=24, max=417, avg=222.83, stdev=24.68 00:21:16.120 clat percentiles (msec): 00:21:16.120 | 1.00th=[ 120], 5.00th=[ 218], 10.00th=[ 218], 20.00th=[ 218], 00:21:16.120 | 30.00th=[ 220], 40.00th=[ 220], 50.00th=[ 220], 60.00th=[ 222], 00:21:16.120 | 70.00th=[ 222], 80.00th=[ 226], 90.00th=[ 243], 95.00th=[ 247], 00:21:16.120 | 99.00th=[ 309], 99.50th=[ 372], 99.90th=[ 418], 99.95th=[ 418], 00:21:16.120 | 99.99th=[ 418] 00:21:16.120 bw ( KiB/s): min=16384, max=18944, per=3.33%, avg=18329.60, stdev=676.80, samples=20 00:21:16.120 iops : min= 64, max= 74, avg=71.60, stdev= 2.64, samples=20 00:21:16.120 lat (msec) : 50=0.27%, 100=0.55%, 250=97.67%, 500=1.50% 00:21:16.120 cpu : usr=0.23%, sys=0.31%, ctx=762, majf=0, minf=1 00:21:16.120 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=97.9%, 32=0.0%, >=64=0.0% 00:21:16.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.120 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.120 issued rwts: total=0,731,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.120 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.120 job12: (groupid=0, jobs=1): err= 0: pid=92584: Tue Jul 23 05:12:14 2024 00:21:16.120 write: IOPS=71, BW=18.0MiB/s (18.8MB/s)(184MiB/10214msec); 0 zone resets 00:21:16.120 slat (usec): min=25, max=6288, avg=55.79, stdev=231.44 00:21:16.120 clat (msec): min=4, max=440, avg=222.16, stdev=30.03 00:21:16.120 lat (msec): min=9, max=440, avg=222.21, stdev=29.97 00:21:16.120 clat percentiles (msec): 00:21:16.120 | 1.00th=[ 75], 5.00th=[ 218], 10.00th=[ 218], 20.00th=[ 218], 00:21:16.120 | 30.00th=[ 220], 40.00th=[ 220], 50.00th=[ 220], 60.00th=[ 222], 00:21:16.120 | 70.00th=[ 222], 80.00th=[ 226], 90.00th=[ 243], 95.00th=[ 247], 00:21:16.120 | 99.00th=[ 334], 99.50th=[ 397], 99.90th=[ 443], 99.95th=[ 443], 00:21:16.120 | 99.99th=[ 443] 00:21:16.120 bw ( KiB/s): min=16384, 
max=19494, per=3.34%, avg=18406.40, stdev=715.54, samples=20 00:21:16.120 iops : min= 64, max= 76, avg=71.85, stdev= 2.76, samples=20 00:21:16.120 lat (msec) : 10=0.27%, 20=0.14%, 50=0.41%, 100=0.41%, 250=97.00% 00:21:16.120 lat (msec) : 500=1.77% 00:21:16.120 cpu : usr=0.23%, sys=0.20%, ctx=776, majf=0, minf=1 00:21:16.120 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:21:16.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.120 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.120 issued rwts: total=0,734,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.120 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.120 job13: (groupid=0, jobs=1): err= 0: pid=92585: Tue Jul 23 05:12:14 2024 00:21:16.120 write: IOPS=71, BW=17.9MiB/s (18.8MB/s)(183MiB/10197msec); 0 zone resets 00:21:16.120 slat (usec): min=21, max=223, avg=48.35, stdev=22.54 00:21:16.120 clat (msec): min=22, max=424, avg=222.86, stdev=25.36 00:21:16.120 lat (msec): min=22, max=424, avg=222.91, stdev=25.36 00:21:16.120 clat percentiles (msec): 00:21:16.120 | 1.00th=[ 117], 5.00th=[ 218], 10.00th=[ 218], 20.00th=[ 218], 00:21:16.120 | 30.00th=[ 220], 40.00th=[ 220], 50.00th=[ 220], 60.00th=[ 222], 00:21:16.120 | 70.00th=[ 222], 80.00th=[ 226], 90.00th=[ 243], 95.00th=[ 247], 00:21:16.120 | 99.00th=[ 317], 99.50th=[ 380], 99.90th=[ 426], 99.95th=[ 426], 00:21:16.120 | 99.99th=[ 426] 00:21:16.120 bw ( KiB/s): min=16384, max=18944, per=3.33%, avg=18331.40, stdev=677.14, samples=20 00:21:16.120 iops : min= 64, max= 74, avg=71.60, stdev= 2.64, samples=20 00:21:16.120 lat (msec) : 50=0.41%, 100=0.41%, 250=97.54%, 500=1.64% 00:21:16.120 cpu : usr=0.20%, sys=0.24%, ctx=758, majf=0, minf=1 00:21:16.120 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=97.9%, 32=0.0%, >=64=0.0% 00:21:16.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.120 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.121 issued rwts: total=0,731,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.121 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.121 job14: (groupid=0, jobs=1): err= 0: pid=92586: Tue Jul 23 05:12:14 2024 00:21:16.121 write: IOPS=71, BW=17.9MiB/s (18.8MB/s)(183MiB/10200msec); 0 zone resets 00:21:16.121 slat (usec): min=12, max=255, avg=47.39, stdev=22.31 00:21:16.121 clat (msec): min=22, max=428, avg=222.92, stdev=25.65 00:21:16.121 lat (msec): min=22, max=428, avg=222.97, stdev=25.65 00:21:16.121 clat percentiles (msec): 00:21:16.121 | 1.00th=[ 117], 5.00th=[ 218], 10.00th=[ 218], 20.00th=[ 218], 00:21:16.121 | 30.00th=[ 220], 40.00th=[ 220], 50.00th=[ 220], 60.00th=[ 222], 00:21:16.121 | 70.00th=[ 222], 80.00th=[ 226], 90.00th=[ 243], 95.00th=[ 247], 00:21:16.121 | 99.00th=[ 321], 99.50th=[ 384], 99.90th=[ 430], 99.95th=[ 430], 00:21:16.121 | 99.99th=[ 430] 00:21:16.121 bw ( KiB/s): min=16384, max=18944, per=3.33%, avg=18329.60, stdev=676.80, samples=20 00:21:16.121 iops : min= 64, max= 74, avg=71.60, stdev= 2.64, samples=20 00:21:16.121 lat (msec) : 50=0.41%, 100=0.41%, 250=97.54%, 500=1.64% 00:21:16.121 cpu : usr=0.16%, sys=0.27%, ctx=759, majf=0, minf=1 00:21:16.121 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=97.9%, 32=0.0%, >=64=0.0% 00:21:16.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.121 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.121 issued rwts: total=0,731,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.121 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.121 job15: (groupid=0, jobs=1): err= 0: pid=92587: Tue Jul 23 05:12:14 2024 00:21:16.121 write: IOPS=72, BW=18.0MiB/s (18.9MB/s)(184MiB/10218msec); 0 zone resets 00:21:16.121 slat (usec): min=29, max=5703, avg=65.96, stdev=236.58 00:21:16.121 clat (msec): min=2, max=438, avg=221.59, stdev=31.69 00:21:16.121 lat (msec): min=6, max=438, avg=221.66, stdev=31.61 
00:21:16.121 clat percentiles (msec): 00:21:16.121 | 1.00th=[ 51], 5.00th=[ 218], 10.00th=[ 218], 20.00th=[ 218], 00:21:16.121 | 30.00th=[ 220], 40.00th=[ 220], 50.00th=[ 220], 60.00th=[ 222], 00:21:16.121 | 70.00th=[ 222], 80.00th=[ 226], 90.00th=[ 243], 95.00th=[ 247], 00:21:16.121 | 99.00th=[ 330], 99.50th=[ 393], 99.90th=[ 439], 99.95th=[ 439], 00:21:16.121 | 99.99th=[ 439] 00:21:16.121 bw ( KiB/s): min=16384, max=20521, per=3.35%, avg=18457.80, stdev=827.26, samples=20 00:21:16.121 iops : min= 64, max= 80, avg=72.05, stdev= 3.22, samples=20 00:21:16.121 lat (msec) : 4=0.14%, 10=0.41%, 20=0.14%, 50=0.27%, 100=0.54% 00:21:16.121 lat (msec) : 250=96.74%, 500=1.77% 00:21:16.121 cpu : usr=0.17%, sys=0.40%, ctx=739, majf=0, minf=1 00:21:16.121 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:21:16.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.121 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.121 issued rwts: total=0,736,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.121 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.121 job16: (groupid=0, jobs=1): err= 0: pid=92588: Tue Jul 23 05:12:14 2024 00:21:16.121 write: IOPS=71, BW=17.9MiB/s (18.8MB/s)(183MiB/10212msec); 0 zone resets 00:21:16.121 slat (usec): min=25, max=1011, avg=50.79, stdev=56.44 00:21:16.121 clat (msec): min=7, max=434, avg=222.50, stdev=28.09 00:21:16.121 lat (msec): min=8, max=434, avg=222.55, stdev=28.06 00:21:16.121 clat percentiles (msec): 00:21:16.121 | 1.00th=[ 93], 5.00th=[ 218], 10.00th=[ 218], 20.00th=[ 218], 00:21:16.121 | 30.00th=[ 220], 40.00th=[ 220], 50.00th=[ 220], 60.00th=[ 222], 00:21:16.121 | 70.00th=[ 222], 80.00th=[ 226], 90.00th=[ 243], 95.00th=[ 247], 00:21:16.121 | 99.00th=[ 326], 99.50th=[ 388], 99.90th=[ 435], 99.95th=[ 435], 00:21:16.121 | 99.99th=[ 435] 00:21:16.121 bw ( KiB/s): min=16384, max=18944, per=3.34%, avg=18380.80, stdev=682.89, samples=20 
00:21:16.121 iops : min= 64, max= 74, avg=71.80, stdev= 2.67, samples=20 00:21:16.121 lat (msec) : 10=0.14%, 20=0.14%, 50=0.27%, 100=0.55%, 250=97.27% 00:21:16.121 lat (msec) : 500=1.64% 00:21:16.121 cpu : usr=0.21%, sys=0.21%, ctx=801, majf=0, minf=1 00:21:16.121 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:21:16.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.121 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.121 issued rwts: total=0,733,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.121 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.121 job17: (groupid=0, jobs=1): err= 0: pid=92593: Tue Jul 23 05:12:14 2024 00:21:16.121 write: IOPS=71, BW=17.9MiB/s (18.8MB/s)(183MiB/10212msec); 0 zone resets 00:21:16.121 slat (usec): min=26, max=111, avg=54.91, stdev=13.44 00:21:16.121 clat (msec): min=10, max=434, avg=222.54, stdev=27.85 00:21:16.121 lat (msec): min=10, max=434, avg=222.59, stdev=27.85 00:21:16.121 clat percentiles (msec): 00:21:16.121 | 1.00th=[ 96], 5.00th=[ 218], 10.00th=[ 218], 20.00th=[ 218], 00:21:16.121 | 30.00th=[ 220], 40.00th=[ 220], 50.00th=[ 220], 60.00th=[ 222], 00:21:16.121 | 70.00th=[ 222], 80.00th=[ 226], 90.00th=[ 243], 95.00th=[ 247], 00:21:16.121 | 99.00th=[ 326], 99.50th=[ 388], 99.90th=[ 435], 99.95th=[ 435], 00:21:16.121 | 99.99th=[ 435] 00:21:16.121 bw ( KiB/s): min=16384, max=18944, per=3.34%, avg=18380.80, stdev=682.89, samples=20 00:21:16.121 iops : min= 64, max= 74, avg=71.80, stdev= 2.67, samples=20 00:21:16.121 lat (msec) : 20=0.27%, 50=0.27%, 100=0.55%, 250=97.27%, 500=1.64% 00:21:16.121 cpu : usr=0.21%, sys=0.36%, ctx=736, majf=0, minf=1 00:21:16.121 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:21:16.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.121 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.121 issued 
rwts: total=0,733,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.121 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.121 job18: (groupid=0, jobs=1): err= 0: pid=92594: Tue Jul 23 05:12:14 2024 00:21:16.121 write: IOPS=71, BW=17.9MiB/s (18.8MB/s)(183MiB/10200msec); 0 zone resets 00:21:16.121 slat (usec): min=24, max=189, avg=54.41, stdev=15.89 00:21:16.121 clat (msec): min=21, max=429, avg=222.90, stdev=25.89 00:21:16.121 lat (msec): min=21, max=429, avg=222.95, stdev=25.89 00:21:16.121 clat percentiles (msec): 00:21:16.121 | 1.00th=[ 115], 5.00th=[ 218], 10.00th=[ 218], 20.00th=[ 218], 00:21:16.121 | 30.00th=[ 220], 40.00th=[ 220], 50.00th=[ 220], 60.00th=[ 222], 00:21:16.121 | 70.00th=[ 222], 80.00th=[ 226], 90.00th=[ 243], 95.00th=[ 247], 00:21:16.121 | 99.00th=[ 321], 99.50th=[ 384], 99.90th=[ 430], 99.95th=[ 430], 00:21:16.121 | 99.99th=[ 430] 00:21:16.121 bw ( KiB/s): min=16384, max=18944, per=3.33%, avg=18329.60, stdev=735.42, samples=20 00:21:16.121 iops : min= 64, max= 74, avg=71.60, stdev= 2.87, samples=20 00:21:16.121 lat (msec) : 50=0.41%, 100=0.41%, 250=97.54%, 500=1.64% 00:21:16.121 cpu : usr=0.24%, sys=0.31%, ctx=738, majf=0, minf=1 00:21:16.121 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=97.9%, 32=0.0%, >=64=0.0% 00:21:16.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.121 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.121 issued rwts: total=0,731,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.121 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.121 job19: (groupid=0, jobs=1): err= 0: pid=92595: Tue Jul 23 05:12:14 2024 00:21:16.121 write: IOPS=71, BW=17.9MiB/s (18.8MB/s)(183MiB/10212msec); 0 zone resets 00:21:16.121 slat (usec): min=19, max=118, avg=53.94, stdev=13.59 00:21:16.121 clat (msec): min=10, max=433, avg=222.57, stdev=27.62 00:21:16.121 lat (msec): min=10, max=433, avg=222.62, stdev=27.63 00:21:16.121 clat percentiles (msec): 
00:21:16.121 | 1.00th=[ 97], 5.00th=[ 218], 10.00th=[ 218], 20.00th=[ 218], 00:21:16.121 | 30.00th=[ 220], 40.00th=[ 220], 50.00th=[ 220], 60.00th=[ 222], 00:21:16.121 | 70.00th=[ 222], 80.00th=[ 226], 90.00th=[ 243], 95.00th=[ 247], 00:21:16.121 | 99.00th=[ 326], 99.50th=[ 388], 99.90th=[ 435], 99.95th=[ 435], 00:21:16.121 | 99.99th=[ 435] 00:21:16.121 bw ( KiB/s): min=16384, max=18944, per=3.34%, avg=18380.80, stdev=682.89, samples=20 00:21:16.121 iops : min= 64, max= 74, avg=71.80, stdev= 2.67, samples=20 00:21:16.121 lat (msec) : 20=0.27%, 50=0.27%, 100=0.55%, 250=97.27%, 500=1.64% 00:21:16.121 cpu : usr=0.24%, sys=0.32%, ctx=734, majf=0, minf=1 00:21:16.121 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:21:16.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.121 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.121 issued rwts: total=0,733,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.121 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.121 job20: (groupid=0, jobs=1): err= 0: pid=92596: Tue Jul 23 05:12:14 2024 00:21:16.121 write: IOPS=71, BW=17.9MiB/s (18.8MB/s)(183MiB/10205msec); 0 zone resets 00:21:16.121 slat (usec): min=20, max=5311, avg=62.48, stdev=194.93 00:21:16.121 clat (msec): min=21, max=429, avg=222.91, stdev=25.87 00:21:16.121 lat (msec): min=26, max=429, avg=222.97, stdev=25.82 00:21:16.121 clat percentiles (msec): 00:21:16.121 | 1.00th=[ 115], 5.00th=[ 218], 10.00th=[ 218], 20.00th=[ 218], 00:21:16.121 | 30.00th=[ 220], 40.00th=[ 220], 50.00th=[ 220], 60.00th=[ 222], 00:21:16.121 | 70.00th=[ 222], 80.00th=[ 226], 90.00th=[ 243], 95.00th=[ 247], 00:21:16.121 | 99.00th=[ 321], 99.50th=[ 384], 99.90th=[ 430], 99.95th=[ 430], 00:21:16.121 | 99.99th=[ 430] 00:21:16.121 bw ( KiB/s): min=16384, max=18944, per=3.33%, avg=18329.60, stdev=696.89, samples=20 00:21:16.121 iops : min= 64, max= 74, avg=71.60, stdev= 2.72, samples=20 00:21:16.121 
lat (msec) : 50=0.41%, 100=0.41%, 250=97.54%, 500=1.64% 00:21:16.121 cpu : usr=0.25%, sys=0.32%, ctx=733, majf=0, minf=1 00:21:16.121 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=97.9%, 32=0.0%, >=64=0.0% 00:21:16.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.121 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.121 issued rwts: total=0,731,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.121 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.121 job21: (groupid=0, jobs=1): err= 0: pid=92597: Tue Jul 23 05:12:14 2024 00:21:16.121 write: IOPS=71, BW=18.0MiB/s (18.8MB/s)(184MiB/10215msec); 0 zone resets 00:21:16.121 slat (usec): min=30, max=157, avg=54.16, stdev=14.17 00:21:16.121 clat (msec): min=8, max=432, avg=222.32, stdev=28.31 00:21:16.121 lat (msec): min=8, max=432, avg=222.38, stdev=28.32 00:21:16.121 clat percentiles (msec): 00:21:16.121 | 1.00th=[ 88], 5.00th=[ 218], 10.00th=[ 218], 20.00th=[ 218], 00:21:16.121 | 30.00th=[ 220], 40.00th=[ 220], 50.00th=[ 220], 60.00th=[ 222], 00:21:16.121 | 70.00th=[ 222], 80.00th=[ 226], 90.00th=[ 243], 95.00th=[ 247], 00:21:16.122 | 99.00th=[ 326], 99.50th=[ 388], 99.90th=[ 435], 99.95th=[ 435], 00:21:16.122 | 99.99th=[ 435] 00:21:16.122 bw ( KiB/s): min=16384, max=19456, per=3.34%, avg=18380.80, stdev=777.37, samples=20 00:21:16.122 iops : min= 64, max= 76, avg=71.80, stdev= 3.04, samples=20 00:21:16.122 lat (msec) : 10=0.14%, 20=0.14%, 50=0.41%, 100=0.41%, 250=97.28% 00:21:16.122 lat (msec) : 500=1.63% 00:21:16.122 cpu : usr=0.23%, sys=0.34%, ctx=734, majf=0, minf=1 00:21:16.122 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:21:16.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.122 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.122 issued rwts: total=0,734,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.122 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:21:16.122 job22: (groupid=0, jobs=1): err= 0: pid=92598: Tue Jul 23 05:12:14 2024 00:21:16.122 write: IOPS=71, BW=17.9MiB/s (18.8MB/s)(183MiB/10188msec); 0 zone resets 00:21:16.122 slat (usec): min=23, max=180, avg=52.60, stdev=14.05 00:21:16.122 clat (msec): min=24, max=411, avg=222.67, stdev=24.22 00:21:16.122 lat (msec): min=24, max=411, avg=222.72, stdev=24.22 00:21:16.122 clat percentiles (msec): 00:21:16.122 | 1.00th=[ 120], 5.00th=[ 218], 10.00th=[ 218], 20.00th=[ 218], 00:21:16.122 | 30.00th=[ 220], 40.00th=[ 220], 50.00th=[ 220], 60.00th=[ 222], 00:21:16.122 | 70.00th=[ 222], 80.00th=[ 226], 90.00th=[ 243], 95.00th=[ 247], 00:21:16.122 | 99.00th=[ 305], 99.50th=[ 368], 99.90th=[ 414], 99.95th=[ 414], 00:21:16.122 | 99.99th=[ 414] 00:21:16.122 bw ( KiB/s): min=16384, max=18944, per=3.33%, avg=18329.55, stdev=676.90, samples=20 00:21:16.122 iops : min= 64, max= 74, avg=71.55, stdev= 2.65, samples=20 00:21:16.122 lat (msec) : 50=0.27%, 100=0.55%, 250=97.67%, 500=1.50% 00:21:16.122 cpu : usr=0.25%, sys=0.30%, ctx=733, majf=0, minf=1 00:21:16.122 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=97.9%, 32=0.0%, >=64=0.0% 00:21:16.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.122 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.122 issued rwts: total=0,731,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.122 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.122 job23: (groupid=0, jobs=1): err= 0: pid=92599: Tue Jul 23 05:12:14 2024 00:21:16.122 write: IOPS=71, BW=17.9MiB/s (18.8MB/s)(183MiB/10193msec); 0 zone resets 00:21:16.122 slat (usec): min=16, max=222, avg=46.14, stdev=17.96 00:21:16.122 clat (msec): min=23, max=417, avg=222.77, stdev=24.73 00:21:16.122 lat (msec): min=24, max=417, avg=222.82, stdev=24.74 00:21:16.122 clat percentiles (msec): 00:21:16.122 | 1.00th=[ 118], 5.00th=[ 218], 10.00th=[ 218], 20.00th=[ 218], 00:21:16.122 | 30.00th=[ 
220], 40.00th=[ 220], 50.00th=[ 220], 60.00th=[ 222], 00:21:16.122 | 70.00th=[ 222], 80.00th=[ 226], 90.00th=[ 243], 95.00th=[ 247], 00:21:16.122 | 99.00th=[ 309], 99.50th=[ 372], 99.90th=[ 418], 99.95th=[ 418], 00:21:16.122 | 99.99th=[ 418] 00:21:16.122 bw ( KiB/s): min=16384, max=18944, per=3.33%, avg=18331.40, stdev=677.14, samples=20 00:21:16.122 iops : min= 64, max= 74, avg=71.60, stdev= 2.64, samples=20 00:21:16.122 lat (msec) : 50=0.27%, 100=0.55%, 250=97.67%, 500=1.50% 00:21:16.122 cpu : usr=0.20%, sys=0.23%, ctx=762, majf=0, minf=1 00:21:16.122 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=97.9%, 32=0.0%, >=64=0.0% 00:21:16.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.122 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.122 issued rwts: total=0,731,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.122 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.122 job24: (groupid=0, jobs=1): err= 0: pid=92600: Tue Jul 23 05:12:14 2024 00:21:16.122 write: IOPS=71, BW=17.9MiB/s (18.8MB/s)(183MiB/10200msec); 0 zone resets 00:21:16.122 slat (usec): min=19, max=186, avg=46.53, stdev=16.46 00:21:16.122 clat (msec): min=22, max=428, avg=222.93, stdev=25.64 00:21:16.122 lat (msec): min=22, max=428, avg=222.97, stdev=25.64 00:21:16.122 clat percentiles (msec): 00:21:16.122 | 1.00th=[ 117], 5.00th=[ 218], 10.00th=[ 218], 20.00th=[ 218], 00:21:16.122 | 30.00th=[ 220], 40.00th=[ 220], 50.00th=[ 220], 60.00th=[ 222], 00:21:16.122 | 70.00th=[ 222], 80.00th=[ 226], 90.00th=[ 243], 95.00th=[ 247], 00:21:16.122 | 99.00th=[ 321], 99.50th=[ 384], 99.90th=[ 430], 99.95th=[ 430], 00:21:16.122 | 99.99th=[ 430] 00:21:16.122 bw ( KiB/s): min=16384, max=18944, per=3.33%, avg=18329.60, stdev=676.80, samples=20 00:21:16.122 iops : min= 64, max= 74, avg=71.60, stdev= 2.64, samples=20 00:21:16.122 lat (msec) : 50=0.41%, 100=0.41%, 250=97.54%, 500=1.64% 00:21:16.122 cpu : usr=0.16%, sys=0.27%, ctx=751, majf=0, 
minf=1 00:21:16.122 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=97.9%, 32=0.0%, >=64=0.0% 00:21:16.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.122 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.122 issued rwts: total=0,731,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.122 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.122 job25: (groupid=0, jobs=1): err= 0: pid=92601: Tue Jul 23 05:12:14 2024 00:21:16.122 write: IOPS=71, BW=17.9MiB/s (18.8MB/s)(183MiB/10189msec); 0 zone resets 00:21:16.122 slat (usec): min=18, max=314, avg=55.61, stdev=18.17 00:21:16.122 clat (msec): min=23, max=413, avg=222.69, stdev=24.44 00:21:16.122 lat (msec): min=23, max=413, avg=222.74, stdev=24.44 00:21:16.122 clat percentiles (msec): 00:21:16.122 | 1.00th=[ 118], 5.00th=[ 218], 10.00th=[ 218], 20.00th=[ 218], 00:21:16.122 | 30.00th=[ 220], 40.00th=[ 220], 50.00th=[ 220], 60.00th=[ 222], 00:21:16.122 | 70.00th=[ 222], 80.00th=[ 226], 90.00th=[ 243], 95.00th=[ 247], 00:21:16.122 | 99.00th=[ 305], 99.50th=[ 368], 99.90th=[ 414], 99.95th=[ 414], 00:21:16.122 | 99.99th=[ 414] 00:21:16.122 bw ( KiB/s): min=16384, max=18944, per=3.33%, avg=18327.75, stdev=676.56, samples=20 00:21:16.122 iops : min= 64, max= 74, avg=71.55, stdev= 2.65, samples=20 00:21:16.122 lat (msec) : 50=0.27%, 100=0.55%, 250=97.67%, 500=1.50% 00:21:16.122 cpu : usr=0.17%, sys=0.40%, ctx=734, majf=0, minf=1 00:21:16.122 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=97.9%, 32=0.0%, >=64=0.0% 00:21:16.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.122 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.122 issued rwts: total=0,731,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.122 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.122 job26: (groupid=0, jobs=1): err= 0: pid=92602: Tue Jul 23 05:12:14 2024 00:21:16.122 write: IOPS=71, BW=17.9MiB/s 
(18.8MB/s)(183MiB/10191msec); 0 zone resets 00:21:16.122 slat (usec): min=16, max=188, avg=56.01, stdev=16.02 00:21:16.122 clat (msec): min=25, max=413, avg=222.72, stdev=24.29 00:21:16.122 lat (msec): min=25, max=413, avg=222.78, stdev=24.29 00:21:16.122 clat percentiles (msec): 00:21:16.122 | 1.00th=[ 121], 5.00th=[ 218], 10.00th=[ 218], 20.00th=[ 218], 00:21:16.122 | 30.00th=[ 220], 40.00th=[ 220], 50.00th=[ 220], 60.00th=[ 222], 00:21:16.122 | 70.00th=[ 222], 80.00th=[ 226], 90.00th=[ 243], 95.00th=[ 247], 00:21:16.122 | 99.00th=[ 305], 99.50th=[ 368], 99.90th=[ 414], 99.95th=[ 414], 00:21:16.122 | 99.99th=[ 414] 00:21:16.122 bw ( KiB/s): min=16384, max=18944, per=3.33%, avg=18327.75, stdev=676.56, samples=20 00:21:16.122 iops : min= 64, max= 74, avg=71.55, stdev= 2.65, samples=20 00:21:16.122 lat (msec) : 50=0.27%, 100=0.55%, 250=97.67%, 500=1.50% 00:21:16.122 cpu : usr=0.25%, sys=0.33%, ctx=736, majf=0, minf=1 00:21:16.122 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=97.9%, 32=0.0%, >=64=0.0% 00:21:16.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.122 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.122 issued rwts: total=0,731,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.122 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.122 job27: (groupid=0, jobs=1): err= 0: pid=92603: Tue Jul 23 05:12:14 2024 00:21:16.122 write: IOPS=72, BW=18.1MiB/s (19.0MB/s)(185MiB/10220msec); 0 zone resets 00:21:16.122 slat (usec): min=21, max=200, avg=50.81, stdev=20.20 00:21:16.122 clat (usec): min=1767, max=437873, avg=220915.03, stdev=33617.70 00:21:16.122 lat (usec): min=1827, max=437928, avg=220965.85, stdev=33619.97 00:21:16.122 clat percentiles (msec): 00:21:16.122 | 1.00th=[ 27], 5.00th=[ 218], 10.00th=[ 218], 20.00th=[ 218], 00:21:16.122 | 30.00th=[ 220], 40.00th=[ 220], 50.00th=[ 220], 60.00th=[ 222], 00:21:16.122 | 70.00th=[ 222], 80.00th=[ 226], 90.00th=[ 243], 95.00th=[ 247], 
00:21:16.122 | 99.00th=[ 330], 99.50th=[ 393], 99.90th=[ 439], 99.95th=[ 439], 00:21:16.122 | 99.99th=[ 439] 00:21:16.122 bw ( KiB/s): min=16384, max=22060, per=3.36%, avg=18509.10, stdev=1111.94, samples=20 00:21:16.122 iops : min= 64, max= 86, avg=72.25, stdev= 4.30, samples=20 00:21:16.122 lat (msec) : 2=0.14%, 4=0.14%, 10=0.41%, 20=0.14%, 50=0.54% 00:21:16.122 lat (msec) : 100=0.41%, 250=96.48%, 500=1.76% 00:21:16.122 cpu : usr=0.21%, sys=0.23%, ctx=784, majf=0, minf=1 00:21:16.122 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:21:16.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.122 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.122 issued rwts: total=0,739,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.122 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.122 job28: (groupid=0, jobs=1): err= 0: pid=92604: Tue Jul 23 05:12:14 2024 00:21:16.122 write: IOPS=71, BW=17.9MiB/s (18.8MB/s)(183MiB/10207msec); 0 zone resets 00:21:16.122 slat (usec): min=15, max=623, avg=52.97, stdev=34.96 00:21:16.122 clat (msec): min=17, max=426, avg=222.75, stdev=25.93 00:21:16.122 lat (msec): min=17, max=426, avg=222.81, stdev=25.93 00:21:16.122 clat percentiles (msec): 00:21:16.122 | 1.00th=[ 112], 5.00th=[ 218], 10.00th=[ 218], 20.00th=[ 218], 00:21:16.122 | 30.00th=[ 220], 40.00th=[ 220], 50.00th=[ 220], 60.00th=[ 222], 00:21:16.122 | 70.00th=[ 222], 80.00th=[ 226], 90.00th=[ 243], 95.00th=[ 247], 00:21:16.122 | 99.00th=[ 317], 99.50th=[ 380], 99.90th=[ 426], 99.95th=[ 426], 00:21:16.122 | 99.99th=[ 426] 00:21:16.122 bw ( KiB/s): min=16384, max=18944, per=3.33%, avg=18355.20, stdev=670.14, samples=20 00:21:16.122 iops : min= 64, max= 74, avg=71.70, stdev= 2.62, samples=20 00:21:16.122 lat (msec) : 20=0.14%, 50=0.27%, 100=0.55%, 250=97.40%, 500=1.64% 00:21:16.122 cpu : usr=0.19%, sys=0.24%, ctx=788, majf=0, minf=1 00:21:16.122 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 
8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:21:16.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.122 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.123 issued rwts: total=0,732,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.123 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.123 job29: (groupid=0, jobs=1): err= 0: pid=92605: Tue Jul 23 05:12:14 2024 00:21:16.123 write: IOPS=71, BW=17.9MiB/s (18.8MB/s)(183MiB/10210msec); 0 zone resets 00:21:16.123 slat (usec): min=26, max=4919, avg=66.15, stdev=212.23 00:21:16.123 clat (msec): min=16, max=428, avg=222.75, stdev=26.19 00:21:16.123 lat (msec): min=19, max=428, avg=222.81, stdev=26.16 00:21:16.123 clat percentiles (msec): 00:21:16.123 | 1.00th=[ 110], 5.00th=[ 218], 10.00th=[ 218], 20.00th=[ 218], 00:21:16.123 | 30.00th=[ 220], 40.00th=[ 220], 50.00th=[ 220], 60.00th=[ 222], 00:21:16.123 | 70.00th=[ 222], 80.00th=[ 226], 90.00th=[ 243], 95.00th=[ 247], 00:21:16.123 | 99.00th=[ 321], 99.50th=[ 384], 99.90th=[ 430], 99.95th=[ 430], 00:21:16.123 | 99.99th=[ 430] 00:21:16.123 bw ( KiB/s): min=16384, max=18944, per=3.33%, avg=18331.40, stdev=735.73, samples=20 00:21:16.123 iops : min= 64, max= 74, avg=71.60, stdev= 2.87, samples=20 00:21:16.123 lat (msec) : 20=0.14%, 50=0.27%, 100=0.55%, 250=97.40%, 500=1.64% 00:21:16.123 cpu : usr=0.24%, sys=0.31%, ctx=737, majf=0, minf=1 00:21:16.123 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=98.0%, 32=0.0%, >=64=0.0% 00:21:16.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.123 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.123 issued rwts: total=0,732,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.123 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:16.123 00:21:16.123 Run status group 0 (all jobs): 00:21:16.123 WRITE: bw=538MiB/s (564MB/s), 17.9MiB/s-18.1MiB/s (18.8MB/s-19.0MB/s), io=5494MiB (5761MB), 
run=10188-10220msec 00:21:16.123 00:21:16.123 Disk stats (read/write): 00:21:16.123 sda: ios=48/725, merge=0/0, ticks=94/159860, in_queue=159954, util=95.06% 00:21:16.123 sdb: ios=48/721, merge=0/0, ticks=96/159391, in_queue=159488, util=94.91% 00:21:16.123 sdc: ios=48/725, merge=0/0, ticks=90/159827, in_queue=159916, util=95.36% 00:21:16.123 sdd: ios=48/722, merge=0/0, ticks=123/159596, in_queue=159719, util=95.32% 00:21:16.123 sde: ios=48/722, merge=0/0, ticks=132/159590, in_queue=159722, util=95.64% 00:21:16.123 sdf: ios=48/731, merge=0/0, ticks=127/160102, in_queue=160228, util=95.96% 00:21:16.123 sdg: ios=40/729, merge=0/0, ticks=125/160157, in_queue=160282, util=95.95% 00:21:16.123 sdh: ios=25/723, merge=0/0, ticks=94/159708, in_queue=159802, util=95.78% 00:21:16.123 sdi: ios=20/723, merge=0/0, ticks=106/159728, in_queue=159834, util=95.82% 00:21:16.123 sdj: ios=0/724, merge=0/0, ticks=0/159832, in_queue=159832, util=95.88% 00:21:16.123 sdk: ios=0/721, merge=0/0, ticks=0/159414, in_queue=159414, util=95.77% 00:21:16.123 sdl: ios=0/721, merge=0/0, ticks=0/159392, in_queue=159393, util=96.15% 00:21:16.123 sdm: ios=0/727, merge=0/0, ticks=0/159953, in_queue=159953, util=96.55% 00:21:16.123 sdn: ios=0/722, merge=0/0, ticks=0/159557, in_queue=159556, util=96.50% 00:21:16.123 sdo: ios=0/722, merge=0/0, ticks=0/159544, in_queue=159544, util=96.56% 00:21:16.123 sdp: ios=0/729, merge=0/0, ticks=0/160082, in_queue=160082, util=97.09% 00:21:16.123 sdq: ios=0/725, merge=0/0, ticks=0/159836, in_queue=159836, util=97.14% 00:21:16.123 sdr: ios=0/725, merge=0/0, ticks=0/159923, in_queue=159922, util=97.38% 00:21:16.123 sds: ios=0/722, merge=0/0, ticks=0/159571, in_queue=159571, util=97.38% 00:21:16.123 sdt: ios=0/725, merge=0/0, ticks=0/159959, in_queue=159958, util=97.76% 00:21:16.123 sdu: ios=0/722, merge=0/0, ticks=0/159569, in_queue=159569, util=97.67% 00:21:16.123 sdv: ios=0/726, merge=0/0, ticks=0/160015, in_queue=160015, util=98.00% 00:21:16.123 sdw: ios=0/721, 
merge=0/0, ticks=0/159433, in_queue=159433, util=97.78% 00:21:16.123 sdx: ios=0/721, merge=0/0, ticks=0/159363, in_queue=159363, util=97.97% 00:21:16.123 sdy: ios=0/722, merge=0/0, ticks=0/159561, in_queue=159560, util=98.08% 00:21:16.123 sdz: ios=0/721, merge=0/0, ticks=0/159411, in_queue=159411, util=98.08% 00:21:16.123 sdaa: ios=0/721, merge=0/0, ticks=0/159439, in_queue=159440, util=98.27% 00:21:16.123 sdab: ios=0/731, merge=0/0, ticks=0/159971, in_queue=159971, util=98.66% 00:21:16.123 sdac: ios=0/723, merge=0/0, ticks=0/159702, in_queue=159701, util=98.59% 00:21:16.123 sdad: ios=0/723, merge=0/0, ticks=0/159702, in_queue=159702, util=98.81% 00:21:16.123 [2024-07-23 05:12:14.866697] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:16.123 [2024-07-23 05:12:14.869355] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:16.123 [2024-07-23 05:12:14.871744] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:16.123 [2024-07-23 05:12:14.873994] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:16.123 [2024-07-23 05:12:14.876048] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:16.123 05:12:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@79 -- # sync 00:21:16.123 [2024-07-23 05:12:14.878239] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:16.123 [2024-07-23 05:12:14.880956] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:16.123 [2024-07-23 05:12:14.883594] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:16.123 [2024-07-23 05:12:14.886460] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:16.123 [2024-07-23 05:12:14.889278] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:16.123 
[2024-07-23 05:12:14.892444] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:16.123 05:12:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:21:16.123 05:12:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@83 -- # rm -f 00:21:16.123 Cleaning up iSCSI connection 00:21:16.123 05:12:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@84 -- # iscsicleanup 00:21:16.123 05:12:14 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:21:16.123 05:12:14 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:21:16.123 Logging out of session [sid: 41, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:21:16.123 Logging out of session [sid: 42, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:21:16.123 Logging out of session [sid: 43, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:21:16.123 Logging out of session [sid: 44, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:21:16.123 Logging out of session [sid: 45, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:21:16.123 Logging out of session [sid: 46, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:21:16.123 Logging out of session [sid: 47, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:21:16.123 Logging out of session [sid: 48, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:21:16.123 Logging out of session [sid: 49, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:21:16.123 Logging out of session [sid: 50, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:21:16.123 Logging out of session [sid: 51, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] 00:21:16.123 Logging out of session [sid: 52, target: 
iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] 00:21:16.123 Logging out of session [sid: 53, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] 00:21:16.123 Logging out of session [sid: 54, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] 00:21:16.123 Logging out of session [sid: 55, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] 00:21:16.123 Logging out of session [sid: 56, target: iqn.2016-06.io.spdk:Target16, portal: 10.0.0.1,3260] 00:21:16.123 Logging out of session [sid: 57, target: iqn.2016-06.io.spdk:Target17, portal: 10.0.0.1,3260] 00:21:16.123 Logging out of session [sid: 58, target: iqn.2016-06.io.spdk:Target18, portal: 10.0.0.1,3260] 00:21:16.123 Logging out of session [sid: 59, target: iqn.2016-06.io.spdk:Target19, portal: 10.0.0.1,3260] 00:21:16.123 Logging out of session [sid: 60, target: iqn.2016-06.io.spdk:Target20, portal: 10.0.0.1,3260] 00:21:16.123 Logging out of session [sid: 61, target: iqn.2016-06.io.spdk:Target21, portal: 10.0.0.1,3260] 00:21:16.123 Logging out of session [sid: 62, target: iqn.2016-06.io.spdk:Target22, portal: 10.0.0.1,3260] 00:21:16.123 Logging out of session [sid: 63, target: iqn.2016-06.io.spdk:Target23, portal: 10.0.0.1,3260] 00:21:16.123 Logging out of session [sid: 64, target: iqn.2016-06.io.spdk:Target24, portal: 10.0.0.1,3260] 00:21:16.123 Logging out of session [sid: 65, target: iqn.2016-06.io.spdk:Target25, portal: 10.0.0.1,3260] 00:21:16.123 Logging out of session [sid: 66, target: iqn.2016-06.io.spdk:Target26, portal: 10.0.0.1,3260] 00:21:16.123 Logging out of session [sid: 67, target: iqn.2016-06.io.spdk:Target27, portal: 10.0.0.1,3260] 00:21:16.123 Logging out of session [sid: 68, target: iqn.2016-06.io.spdk:Target28, portal: 10.0.0.1,3260] 00:21:16.123 Logging out of session [sid: 69, target: iqn.2016-06.io.spdk:Target29, portal: 10.0.0.1,3260] 00:21:16.124 Logging out of session [sid: 70, target: iqn.2016-06.io.spdk:Target30, portal: 10.0.0.1,3260] 00:21:16.124 Logout 
of [sid: 41, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:21:16.124 Logout of [sid: 42, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:21:16.124 Logout of [sid: 43, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:21:16.124 Logout of [sid: 44, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:21:16.124 Logout of [sid: 45, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 00:21:16.124 Logout of [sid: 46, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 00:21:16.124 Logout of [sid: 47, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:21:16.124 Logout of [sid: 48, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 00:21:16.124 Logout of [sid: 49, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:21:16.124 Logout of [sid: 50, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:21:16.124 Logout of [sid: 51, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] successful. 00:21:16.124 Logout of [sid: 52, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] successful. 00:21:16.124 Logout of [sid: 53, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] successful. 00:21:16.124 Logout of [sid: 54, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] successful. 00:21:16.124 Logout of [sid: 55, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] successful. 00:21:16.124 Logout of [sid: 56, target: iqn.2016-06.io.spdk:Target16, portal: 10.0.0.1,3260] successful. 00:21:16.124 Logout of [sid: 57, target: iqn.2016-06.io.spdk:Target17, portal: 10.0.0.1,3260] successful. 00:21:16.124 Logout of [sid: 58, target: iqn.2016-06.io.spdk:Target18, portal: 10.0.0.1,3260] successful. 00:21:16.124 Logout of [sid: 59, target: iqn.2016-06.io.spdk:Target19, portal: 10.0.0.1,3260] successful. 
00:21:16.124 Logout of [sid: 60, target: iqn.2016-06.io.spdk:Target20, portal: 10.0.0.1,3260] successful. 00:21:16.124 Logout of [sid: 61, target: iqn.2016-06.io.spdk:Target21, portal: 10.0.0.1,3260] successful. 00:21:16.124 Logout of [sid: 62, target: iqn.2016-06.io.spdk:Target22, portal: 10.0.0.1,3260] successful. 00:21:16.124 Logout of [sid: 63, target: iqn.2016-06.io.spdk:Target23, portal: 10.0.0.1,3260] successful. 00:21:16.124 Logout of [sid: 64, target: iqn.2016-06.io.spdk:Target24, portal: 10.0.0.1,3260] successful. 00:21:16.124 Logout of [sid: 65, target: iqn.2016-06.io.spdk:Target25, portal: 10.0.0.1,3260] successful. 00:21:16.124 Logout of [sid: 66, target: iqn.2016-06.io.spdk:Target26, portal: 10.0.0.1,3260] successful. 00:21:16.124 Logout of [sid: 67, target: iqn.2016-06.io.spdk:Target27, portal: 10.0.0.1,3260] successful. 00:21:16.124 Logout of [sid: 68, target: iqn.2016-06.io.spdk:Target28, portal: 10.0.0.1,3260] successful. 00:21:16.124 Logout of [sid: 69, target: iqn.2016-06.io.spdk:Target29, portal: 10.0.0.1,3260] successful. 00:21:16.124 Logout of [sid: 70, target: iqn.2016-06.io.spdk:Target30, portal: 10.0.0.1,3260] successful. 
00:21:16.124 05:12:15 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:21:16.124 05:12:15 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@983 -- # rm -rf 00:21:16.124 INFO: Removing lvol bdevs 00:21:16.124 05:12:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@85 -- # remove_backends 00:21:16.124 05:12:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@22 -- # echo 'INFO: Removing lvol bdevs' 00:21:16.124 05:12:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # seq 1 30 00:21:16.124 05:12:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:16.124 05:12:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_1 00:21:16.124 05:12:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_1 00:21:16.124 [2024-07-23 05:12:15.968842] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (22dd9acd-af05-4771-9a1a-fb317ca57322) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:16.124 INFO: lvol bdev lvs0/lbd_1 removed 00:21:16.124 05:12:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_1 removed' 00:21:16.124 05:12:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:16.124 05:12:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_2 00:21:16.124 05:12:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_2 00:21:16.124 [2024-07-23 05:12:16.244930] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (f1344b15-5e21-457a-be45-4341661ba9b5) 
received event(SPDK_BDEV_EVENT_REMOVE) 00:21:16.124 INFO: lvol bdev lvs0/lbd_2 removed 00:21:16.124 05:12:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_2 removed' 00:21:16.124 05:12:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:16.124 05:12:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_3 00:21:16.124 05:12:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_3 00:21:16.382 [2024-07-23 05:12:16.525032] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (96dde8b1-e11f-4aa0-8cdc-a9f432cb4a51) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:16.382 INFO: lvol bdev lvs0/lbd_3 removed 00:21:16.382 05:12:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_3 removed' 00:21:16.382 05:12:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:16.382 05:12:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_4 00:21:16.382 05:12:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_4 00:21:16.640 [2024-07-23 05:12:16.777133] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (5ec3b442-56f1-49a2-86e7-ed73e22f198e) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:16.640 INFO: lvol bdev lvs0/lbd_4 removed 00:21:16.640 05:12:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_4 removed' 00:21:16.640 05:12:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:16.640 
05:12:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_5 00:21:16.640 05:12:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_5 00:21:16.899 [2024-07-23 05:12:17.061250] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (0977cbca-d27c-44cb-814c-b494bc065668) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:16.899 INFO: lvol bdev lvs0/lbd_5 removed 00:21:16.899 05:12:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_5 removed' 00:21:16.899 05:12:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:16.899 05:12:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_6 00:21:16.899 05:12:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_6 00:21:17.157 [2024-07-23 05:12:17.305644] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (552501b5-f707-4323-98c1-458f6960b95b) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:17.157 INFO: lvol bdev lvs0/lbd_6 removed 00:21:17.157 05:12:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_6 removed' 00:21:17.157 05:12:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:17.157 05:12:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_7 00:21:17.157 05:12:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_7 00:21:17.416 [2024-07-23 05:12:17.525744] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name 
(33a8293b-8421-4de4-ad2d-2beaa80065ff) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:17.416 INFO: lvol bdev lvs0/lbd_7 removed 00:21:17.416 05:12:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_7 removed' 00:21:17.416 05:12:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:17.416 05:12:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_8 00:21:17.416 05:12:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_8 00:21:17.674 [2024-07-23 05:12:17.753861] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (20ee0042-72a7-461c-8950-826f4ed044c9) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:17.674 INFO: lvol bdev lvs0/lbd_8 removed 00:21:17.674 05:12:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_8 removed' 00:21:17.674 05:12:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:17.674 05:12:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_9 00:21:17.674 05:12:17 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_9 00:21:17.933 [2024-07-23 05:12:17.985956] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (818f2c74-abf7-48b4-a10f-1aaa6d8f9306) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:17.933 INFO: lvol bdev lvs0/lbd_9 removed 00:21:17.933 05:12:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_9 removed' 00:21:17.933 05:12:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 
$CONNECTION_NUMBER) 00:21:17.933 05:12:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_10 00:21:17.933 05:12:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_10 00:21:18.191 [2024-07-23 05:12:18.210059] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (5da7b3e1-8d86-4bee-a3eb-f4c5e5a5e642) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:18.191 INFO: lvol bdev lvs0/lbd_10 removed 00:21:18.191 05:12:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_10 removed' 00:21:18.191 05:12:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:18.191 05:12:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_11 00:21:18.191 05:12:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_11 00:21:18.451 [2024-07-23 05:12:18.430133] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (01c84ac6-4d0f-4038-bf61-1a210cd9979b) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:18.451 INFO: lvol bdev lvs0/lbd_11 removed 00:21:18.451 05:12:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_11 removed' 00:21:18.451 05:12:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:18.451 05:12:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_12 00:21:18.451 05:12:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_12 00:21:18.451 [2024-07-23 05:12:18.662270] lun.c: 398:bdev_event_cb: 
*NOTICE*: bdev name (a707c1e4-dae5-4a15-86ca-835f78cb64a5) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:18.710 INFO: lvol bdev lvs0/lbd_12 removed 00:21:18.710 05:12:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_12 removed' 00:21:18.710 05:12:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:18.710 05:12:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_13 00:21:18.710 05:12:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_13 00:21:18.710 [2024-07-23 05:12:18.894375] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (b9a5e644-297d-42fe-94dd-11dc863a23da) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:18.710 INFO: lvol bdev lvs0/lbd_13 removed 00:21:18.710 05:12:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_13 removed' 00:21:18.710 05:12:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:18.710 05:12:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_14 00:21:18.710 05:12:18 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_14 00:21:18.968 [2024-07-23 05:12:19.114449] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (79c74604-49bb-4f1e-9133-2047fd540323) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:18.968 INFO: lvol bdev lvs0/lbd_14 removed 00:21:18.968 05:12:19 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_14 removed' 00:21:18.968 05:12:19 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:18.968 05:12:19 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_15 00:21:18.968 05:12:19 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_15 00:21:19.227 [2024-07-23 05:12:19.378640] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (db2e837a-2a2c-4bdc-9940-6d77bf865b0b) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:19.227 INFO: lvol bdev lvs0/lbd_15 removed 00:21:19.227 05:12:19 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_15 removed' 00:21:19.227 05:12:19 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:19.227 05:12:19 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_16 00:21:19.227 05:12:19 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_16 00:21:19.485 [2024-07-23 05:12:19.602733] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (5384cb09-ed26-47c3-8dad-11e8c316ce00) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:19.485 INFO: lvol bdev lvs0/lbd_16 removed 00:21:19.485 05:12:19 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_16 removed' 00:21:19.485 05:12:19 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:19.485 05:12:19 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_17 00:21:19.485 05:12:19 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_17 00:21:19.743 
[2024-07-23 05:12:19.886851] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (60db7383-cb97-4f0a-9be1-2b88ce1fa264) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:19.743 INFO: lvol bdev lvs0/lbd_17 removed 00:21:19.743 05:12:19 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_17 removed' 00:21:19.743 05:12:19 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:19.743 05:12:19 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_18 00:21:19.743 05:12:19 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_18 00:21:20.001 [2024-07-23 05:12:20.154977] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (b8884362-7314-43ec-951d-c3a8e60bfdd2) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:20.001 INFO: lvol bdev lvs0/lbd_18 removed 00:21:20.001 05:12:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_18 removed' 00:21:20.001 05:12:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:20.001 05:12:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_19 00:21:20.001 05:12:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_19 00:21:20.307 [2024-07-23 05:12:20.387063] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (58765a23-b07f-408e-b89b-4e20397e7523) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:20.307 INFO: lvol bdev lvs0/lbd_19 removed 00:21:20.307 05:12:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_19 removed' 00:21:20.307 05:12:20 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:20.307 05:12:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_20 00:21:20.307 05:12:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_20 00:21:20.565 [2024-07-23 05:12:20.635196] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (3323802a-d06e-4b55-9ce0-198e942b23f2) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:20.565 INFO: lvol bdev lvs0/lbd_20 removed 00:21:20.565 05:12:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_20 removed' 00:21:20.565 05:12:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:20.565 05:12:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_21 00:21:20.565 05:12:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_21 00:21:20.826 [2024-07-23 05:12:20.926695] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (8c6792b7-35a8-4796-85ec-faa22a308af8) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:20.826 INFO: lvol bdev lvs0/lbd_21 removed 00:21:20.826 05:12:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_21 removed' 00:21:20.826 05:12:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:20.826 05:12:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_22 00:21:20.826 05:12:20 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete lvs0/lbd_22 00:21:21.085 [2024-07-23 05:12:21.158765] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (40fc434b-4cee-406e-be56-4612073f3af5) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:21.085 INFO: lvol bdev lvs0/lbd_22 removed 00:21:21.085 05:12:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_22 removed' 00:21:21.085 05:12:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:21.085 05:12:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_23 00:21:21.085 05:12:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_23 00:21:21.344 [2024-07-23 05:12:21.386878] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (ddeb4272-0cd0-41a3-b8bb-322ffaaf7739) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:21.344 INFO: lvol bdev lvs0/lbd_23 removed 00:21:21.344 05:12:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_23 removed' 00:21:21.344 05:12:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:21.344 05:12:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_24 00:21:21.344 05:12:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_24 00:21:21.603 [2024-07-23 05:12:21.666968] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (78d5c641-7499-4e1c-a6b1-ecfb121003af) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:21.603 INFO: lvol bdev lvs0/lbd_24 removed 00:21:21.603 05:12:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_24 
removed' 00:21:21.603 05:12:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:21.603 05:12:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_25 00:21:21.603 05:12:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_25 00:21:21.862 [2024-07-23 05:12:21.954597] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (7a63479e-3601-4e88-ab2b-5538b8e82c7e) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:21.862 INFO: lvol bdev lvs0/lbd_25 removed 00:21:21.862 05:12:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_25 removed' 00:21:21.862 05:12:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:21.862 05:12:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_26 00:21:21.862 05:12:21 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_26 00:21:22.120 [2024-07-23 05:12:22.186705] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (380e8e46-d8eb-4366-8a3f-2b6392b0bfb7) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:22.120 INFO: lvol bdev lvs0/lbd_26 removed 00:21:22.120 05:12:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_26 removed' 00:21:22.120 05:12:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:22.120 05:12:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_27 00:21:22.120 05:12:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_27 00:21:22.378 [2024-07-23 05:12:22.462873] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (cb760e08-0c01-463d-819a-77592ae9aba9) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:22.378 INFO: lvol bdev lvs0/lbd_27 removed 00:21:22.379 05:12:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_27 removed' 00:21:22.379 05:12:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:22.379 05:12:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_28 00:21:22.379 05:12:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_28 00:21:22.642 [2024-07-23 05:12:22.694959] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (1fa361ba-a00b-4ac8-80fe-0e3f9b0ab3bd) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:22.642 INFO: lvol bdev lvs0/lbd_28 removed 00:21:22.642 05:12:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_28 removed' 00:21:22.642 05:12:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:22.642 05:12:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_29 00:21:22.642 05:12:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_29 00:21:22.900 [2024-07-23 05:12:22.927035] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (be7dc669-6f49-4674-b307-9e90a526f865) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:22.900 INFO: lvol bdev lvs0/lbd_29 removed 00:21:22.900 05:12:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- 
# echo -e '\tINFO: lvol bdev lvs0/lbd_29 removed' 00:21:22.900 05:12:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:22.900 05:12:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_30 00:21:22.900 05:12:22 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_30 00:21:23.168 [2024-07-23 05:12:23.163148] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (b5bf3155-3fa9-4474-9539-bb998dca6dc7) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:23.168 INFO: lvol bdev lvs0/lbd_30 removed 00:21:23.168 05:12:23 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_30 removed' 00:21:23.168 05:12:23 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@28 -- # sleep 1 00:21:24.113 INFO: Removing lvol stores 00:21:24.113 05:12:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@30 -- # echo 'INFO: Removing lvol stores' 00:21:24.113 05:12:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:21:24.371 INFO: lvol store lvs0 removed 00:21:24.371 INFO: Removing NVMe 00:21:24.371 05:12:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@32 -- # echo 'INFO: lvol store lvs0 removed' 00:21:24.371 05:12:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@34 -- # echo 'INFO: Removing NVMe' 00:21:24.371 05:12:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:21:24.630 05:12:24 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@37 -- # return 0 00:21:24.630 05:12:24 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@86 -- # killprocess 90723 00:21:24.630 05:12:24 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 90723 ']' 00:21:24.630 05:12:24 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@952 -- # kill -0 90723 00:21:24.630 05:12:24 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@953 -- # uname 00:21:24.630 05:12:24 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:24.630 05:12:24 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90723 00:21:24.630 killing process with pid 90723 00:21:24.630 05:12:24 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:24.630 05:12:24 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:24.630 05:12:24 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90723' 00:21:24.630 05:12:24 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@967 -- # kill 90723 00:21:24.630 05:12:24 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@972 -- # wait 90723 00:21:25.203 05:12:25 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@87 -- # iscsitestfini 00:21:25.203 05:12:25 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:21:25.203 00:21:25.203 real 0m49.458s 00:21:25.203 user 1m1.932s 00:21:25.203 sys 0m13.290s 00:21:25.203 05:12:25 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:25.203 05:12:25 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:25.203 ************************************ 00:21:25.203 END TEST iscsi_tgt_multiconnection 00:21:25.203 ************************************ 00:21:25.203 
05:12:25 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:21:25.203 05:12:25 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@46 -- # '[' 1 -eq 1 ']' 00:21:25.203 05:12:25 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@47 -- # run_test iscsi_tgt_ext4test /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ext4test/ext4test.sh 00:21:25.203 05:12:25 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:25.203 05:12:25 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:25.203 05:12:25 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:21:25.203 ************************************ 00:21:25.203 START TEST iscsi_tgt_ext4test 00:21:25.203 ************************************ 00:21:25.203 05:12:25 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ext4test/ext4test.sh 00:21:25.203 * Looking for test storage... 00:21:25.203 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ext4test 00:21:25.203 05:12:25 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:21:25.203 05:12:25 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:21:25.203 05:12:25 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:21:25.203 05:12:25 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:21:25.203 05:12:25 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:21:25.203 05:12:25 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:21:25.204 05:12:25 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:21:25.204 05:12:25 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:21:25.204 05:12:25 iscsi_tgt.iscsi_tgt_ext4test -- 
iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:21:25.204 05:12:25 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:21:25.204 05:12:25 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:21:25.204 05:12:25 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:21:25.204 05:12:25 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:21:25.204 05:12:25 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:21:25.204 05:12:25 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:21:25.204 05:12:25 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:21:25.204 05:12:25 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:21:25.204 05:12:25 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:21:25.204 05:12:25 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:21:25.204 05:12:25 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:21:25.204 05:12:25 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@24 -- # iscsitestinit 00:21:25.204 05:12:25 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:21:25.204 05:12:25 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@28 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:25.204 05:12:25 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@29 -- # node_base=iqn.2013-06.com.intel.ch.spdk 00:21:25.204 05:12:25 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@31 -- # timing_enter start_iscsi_tgt 00:21:25.204 05:12:25 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:25.204 05:12:25 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@10 -- # set +x 00:21:25.204 05:12:25 
iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@34 -- # pid=93157 00:21:25.204 05:12:25 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@33 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:21:25.204 Process pid: 93157 00:21:25.204 05:12:25 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@35 -- # echo 'Process pid: 93157' 00:21:25.204 05:12:25 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@37 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:25.204 05:12:25 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@39 -- # waitforlisten 93157 00:21:25.204 05:12:25 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@829 -- # '[' -z 93157 ']' 00:21:25.204 05:12:25 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.204 05:12:25 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:25.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:25.204 05:12:25 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:25.204 05:12:25 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:25.204 05:12:25 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@10 -- # set +x 00:21:25.204 [2024-07-23 05:12:25.365175] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:21:25.204 [2024-07-23 05:12:25.365269] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93157 ] 00:21:25.462 [2024-07-23 05:12:25.495748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.462 [2024-07-23 05:12:25.588751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.399 05:12:26 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:26.399 05:12:26 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@862 -- # return 0 00:21:26.399 05:12:26 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_options -o 30 -a 4 -b iqn.2013-06.com.intel.ch.spdk 00:21:26.399 05:12:26 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:21:26.966 05:12:26 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:21:26.966 05:12:26 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:27.224 05:12:27 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 512 4096 --name Malloc0 00:21:27.505 Malloc0 00:21:27.505 iscsi_tgt is listening. Running tests... 00:21:27.505 05:12:27 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@44 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:21:27.505 05:12:27 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@46 -- # timing_exit start_iscsi_tgt 00:21:27.505 05:12:27 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:27.505 05:12:27 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@10 -- # set +x 00:21:27.763 05:12:27 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:21:28.023 05:12:28 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:21:28.282 05:12:28 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_error_create Malloc0 00:21:28.282 true 00:21:28.282 05:12:28 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target0 Target0_alias EE_Malloc0:0 1:2 64 -d 00:21:28.541 05:12:28 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@55 -- # sleep 1 00:21:29.916 05:12:29 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@57 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:21:29.917 10.0.0.1:3260,1 iqn.2013-06.com.intel.ch.spdk:Target0 00:21:29.917 05:12:29 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@58 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:21:29.917 Logging in to [iface: default, target: iqn.2013-06.com.intel.ch.spdk:Target0, portal: 10.0.0.1,3260] 00:21:29.917 Login to [iface: default, target: iqn.2013-06.com.intel.ch.spdk:Target0, portal: 10.0.0.1,3260] successful. 
00:21:29.917 05:12:29 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@59 -- # waitforiscsidevices 1 00:21:29.917 05:12:29 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@116 -- # local num=1 00:21:29.917 05:12:29 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:21:29.917 05:12:29 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:21:29.917 05:12:29 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:21:29.917 05:12:29 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:21:29.917 [2024-07-23 05:12:29.765412] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:29.917 05:12:29 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # n=1 00:21:29.917 05:12:29 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:21:29.917 05:12:29 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@123 -- # return 0 00:21:29.917 Test error injection 00:21:29.917 05:12:29 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@61 -- # echo 'Test error injection' 00:21:29.917 05:12:29 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_error_inject_error EE_Malloc0 all failure -n 1000 00:21:29.917 05:12:30 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@64 -- # iscsiadm -m session -P 3 00:21:29.917 05:12:30 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@64 -- # grep 'Attached scsi disk' 00:21:29.917 05:12:30 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@64 -- # awk '{print $4}' 00:21:29.917 05:12:30 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@64 -- # head -n1 00:21:29.917 05:12:30 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@64 -- # dev=sda 00:21:29.917 05:12:30 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@65 -- # waitforfile /dev/sda 00:21:29.917 05:12:30 
iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1265 -- # local i=0 00:21:29.917 05:12:30 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda ']' 00:21:29.917 05:12:30 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1272 -- # '[' '!' -e /dev/sda ']' 00:21:29.917 05:12:30 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1276 -- # return 0 00:21:29.917 05:12:30 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@66 -- # make_filesystem ext4 /dev/sda 00:21:29.917 05:12:30 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@924 -- # local fstype=ext4 00:21:29.917 05:12:30 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@925 -- # local dev_name=/dev/sda 00:21:29.917 05:12:30 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@926 -- # local i=0 00:21:29.917 05:12:30 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@927 -- # local force 00:21:29.917 05:12:30 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:21:29.917 05:12:30 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@930 -- # force=-F 00:21:29.917 05:12:30 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:21:29.917 mke2fs 1.46.5 (30-Dec-2021) 00:21:30.436 Discarding device blocks: 0/131072 done 00:21:30.436 Creating filesystem with 131072 4k blocks and 32768 inodes 00:21:30.436 Filesystem UUID: 18971788-f429-4be7-8eb9-883cb44a1e1b 00:21:30.436 Superblock backups stored on blocks: 00:21:30.436 32768, 98304 00:21:30.436 00:21:30.436 Allocating group tables: 0/4Warning: could not erase sector 2: Input/output error 00:21:30.436 done 00:21:30.436 Warning: could not read block 0: Input/output error 00:21:30.693 Warning: could not erase sector 0: Input/output error 00:21:30.693 Writing inode tables: 0/4 done 00:21:30.693 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:21:30.693 05:12:30 
iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 0 -ge 15 ']' 00:21:30.693 05:12:30 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=1 00:21:30.693 05:12:30 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:21:30.693 [2024-07-23 05:12:30.813355] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:31.627 05:12:31 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:21:31.627 mke2fs 1.46.5 (30-Dec-2021) 00:21:31.885 Discarding device blocks: 0/131072 done 00:21:32.144 Warning: could not erase sector 2: Input/output error 00:21:32.144 Creating filesystem with 131072 4k blocks and 32768 inodes 00:21:32.144 Filesystem UUID: aa727a8a-fd16-4547-8c5c-0fd67abe4d4d 00:21:32.144 Superblock backups stored on blocks: 00:21:32.144 32768, 98304 00:21:32.144 00:21:32.144 Allocating group tables: 0/4 done 00:21:32.144 Warning: could not read block 0: Input/output error 00:21:32.144 Warning: could not erase sector 0: Input/output error 00:21:32.144 Writing inode tables: 0/4 done 00:21:32.402 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:21:32.402 05:12:32 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 1 -ge 15 ']' 00:21:32.402 05:12:32 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=2 00:21:32.402 05:12:32 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:21:32.402 [2024-07-23 05:12:32.397378] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:33.335 05:12:33 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:21:33.335 mke2fs 1.46.5 (30-Dec-2021) 00:21:33.593 Discarding device blocks: 0/131072 done 00:21:33.593 Creating filesystem with 131072 4k blocks and 32768 inodes 00:21:33.593 Filesystem UUID: dd542b9a-beef-4e6b-a0fe-ef23dc2c271f 00:21:33.593 Superblock backups 
stored on blocks: 00:21:33.593 32768, 98304 00:21:33.593 00:21:33.593 Allocating group tables: 0/4 done 00:21:33.593 Warning: could not erase sector 2: Input/output error 00:21:33.851 Warning: could not read block 0: Input/output error 00:21:33.851 Warning: could not erase sector 0: Input/output error 00:21:33.851 Writing inode tables: 0/4 done 00:21:33.851 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:21:33.851 05:12:33 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 2 -ge 15 ']' 00:21:33.851 05:12:33 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=3 00:21:33.851 05:12:33 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:21:33.851 [2024-07-23 05:12:33.983149] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:34.786 05:12:34 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:21:34.786 mke2fs 1.46.5 (30-Dec-2021) 00:21:35.350 Discarding device blocks: 0/131072 done 00:21:35.350 Creating filesystem with 131072 4k blocks and 32768 inodes 00:21:35.350 Filesystem UUID: 019426e2-631f-4a6c-adf1-d5224ac61ae3 00:21:35.350 Superblock backups stored on blocks: 00:21:35.350 32768, 98304 00:21:35.350 00:21:35.350 Allocating group tables: 0/4 done 00:21:35.350 Warning: could not erase sector 2: Input/output error 00:21:35.350 Warning: could not read block 0: Input/output error 00:21:35.609 Warning: could not erase sector 0: Input/output error 00:21:35.609 Writing inode tables: 0/4 done 00:21:35.609 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:21:35.609 05:12:35 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 3 -ge 15 ']' 00:21:35.609 05:12:35 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=4 00:21:35.609 05:12:35 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:21:35.609 [2024-07-23 05:12:35.677761] 
scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:36.547 05:12:36 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:21:36.547 mke2fs 1.46.5 (30-Dec-2021) 00:21:36.806 Discarding device blocks: 0/131072 done 00:21:36.806 Creating filesystem with 131072 4k blocks and 32768 inodes 00:21:36.806 Filesystem UUID: a2268576-2c0f-4fb6-a53a-c29928430c32 00:21:36.806 Superblock backups stored on blocks: 00:21:36.806 32768, 98304 00:21:36.806 00:21:36.806 Allocating group tables: 0/4Warning: could not erase sector 2: Input/output error 00:21:36.806 done 00:21:37.066 Warning: could not read block 0: Input/output error 00:21:37.066 Warning: could not erase sector 0: Input/output error 00:21:37.066 Writing inode tables: 0/4 done 00:21:37.066 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:21:37.066 05:12:37 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 4 -ge 15 ']' 00:21:37.066 05:12:37 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=5 00:21:37.066 05:12:37 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:21:37.066 [2024-07-23 05:12:37.265528] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:38.445 05:12:38 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:21:38.445 mke2fs 1.46.5 (30-Dec-2021) 00:21:38.445 Discarding device blocks: 0/131072 done 00:21:38.445 Creating filesystem with 131072 4k blocks and 32768 inodes 00:21:38.445 Filesystem UUID: c2606709-8dc2-4633-b109-79d4f672144e 00:21:38.445 Superblock backups stored on blocks: 00:21:38.445 32768, 98304 00:21:38.445 00:21:38.445 Allocating group tables: 0/4 done 00:21:38.445 Warning: could not erase sector 2: Input/output error 00:21:38.703 Warning: could not read block 0: Input/output error 00:21:38.703 Warning: could not erase sector 0: Input/output error 00:21:38.703 
Writing inode tables: 0/4 done 00:21:38.703 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:21:38.703 05:12:38 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 5 -ge 15 ']' 00:21:38.703 05:12:38 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=6 00:21:38.703 05:12:38 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:21:38.703 [2024-07-23 05:12:38.850419] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:39.640 05:12:39 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:21:39.640 mke2fs 1.46.5 (30-Dec-2021) 00:21:39.901 Discarding device blocks: 0/131072 done 00:21:40.171 Creating filesystem with 131072 4k blocks and 32768 inodes 00:21:40.171 Filesystem UUID: d184489f-245b-4bd5-95f9-af9f464e79f8 00:21:40.171 Superblock backups stored on blocks: 00:21:40.171 32768, 98304 00:21:40.171 00:21:40.171 Allocating group tables: 0/4 done 00:21:40.171 Warning: could not erase sector 2: Input/output error 00:21:40.171 Warning: could not read block 0: Input/output error 00:21:40.430 Warning: could not erase sector 0: Input/output error 00:21:40.430 Writing inode tables: 0/4 done 00:21:40.430 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:21:40.430 05:12:40 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 6 -ge 15 ']' 00:21:40.430 05:12:40 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=7 00:21:40.430 [2024-07-23 05:12:40.506945] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:40.430 05:12:40 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:21:41.364 05:12:41 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:21:41.364 mke2fs 1.46.5 (30-Dec-2021) 00:21:41.622 Discarding device blocks: 0/131072 done 00:21:41.885 Creating 
filesystem with 131072 4k blocks and 32768 inodes 00:21:41.885 Filesystem UUID: bc78a3b9-4693-4404-94c3-7cb62434682b 00:21:41.885 Superblock backups stored on blocks: 00:21:41.885 32768, 98304 00:21:41.885 00:21:41.885 Allocating group tables: 0/4 done 00:21:41.885 Warning: could not erase sector 2: Input/output error 00:21:41.885 Warning: could not read block 0: Input/output error 00:21:41.885 Warning: could not erase sector 0: Input/output error 00:21:41.885 Writing inode tables: 0/4 done 00:21:41.885 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:21:41.885 05:12:42 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 7 -ge 15 ']' 00:21:41.885 [2024-07-23 05:12:42.093696] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:41.885 05:12:42 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=8 00:21:41.885 05:12:42 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:21:43.268 05:12:43 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:21:43.268 mke2fs 1.46.5 (30-Dec-2021) 00:21:43.268 Discarding device blocks: 0/131072 done 00:21:43.268 Creating filesystem with 131072 4k blocks and 32768 inodes 00:21:43.268 Filesystem UUID: 94bf7cd4-be79-4fae-852c-9de48f81ad8c 00:21:43.268 Superblock backups stored on blocks: 00:21:43.268 32768, 98304 00:21:43.268 00:21:43.268 Allocating group tables: 0/4 done 00:21:43.268 Warning: could not erase sector 2: Input/output error 00:21:43.525 Warning: could not read block 0: Input/output error 00:21:43.525 Warning: could not erase sector 0: Input/output error 00:21:43.525 Writing inode tables: 0/4 done 00:21:43.525 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:21:43.525 05:12:43 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 8 -ge 15 ']' 00:21:43.525 05:12:43 iscsi_tgt.iscsi_tgt_ext4test -- 
common/autotest_common.sh@939 -- # i=9 00:21:43.525 05:12:43 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:21:43.525 [2024-07-23 05:12:43.683288] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:44.900 05:12:44 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:21:44.900 mke2fs 1.46.5 (30-Dec-2021) 00:21:44.900 Discarding device blocks: 0/131072 done 00:21:44.900 Creating filesystem with 131072 4k blocks and 32768 inodes 00:21:44.900 Filesystem UUID: 9c68d7e3-8b10-4e62-93e2-9dbbdf182554 00:21:44.900 Superblock backups stored on blocks: 00:21:44.900 32768, 98304 00:21:44.900 00:21:44.900 Allocating group tables: 0/4 done 00:21:44.900 Writing inode tables: 0/4 done 00:21:44.900 Creating journal (4096 blocks): done 00:21:44.900 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:21:44.900 05:12:44 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 9 -ge 15 ']' 00:21:44.900 [2024-07-23 05:12:44.978015] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:44.900 05:12:44 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=10 00:21:44.900 05:12:44 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:21:45.834 05:12:45 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:21:45.834 mke2fs 1.46.5 (30-Dec-2021) 00:21:46.093 Discarding device blocks: 0/131072 done 00:21:46.093 Creating filesystem with 131072 4k blocks and 32768 inodes 00:21:46.093 Filesystem UUID: d1cf356a-3ebd-4e73-85f8-c4cf343d34c8 00:21:46.093 Superblock backups stored on blocks: 00:21:46.093 32768, 98304 00:21:46.093 00:21:46.093 Allocating group tables: 0/4 done 00:21:46.093 Writing inode tables: 0/4 done 00:21:46.093 Creating journal (4096 blocks): done 00:21:46.093 Writing 
superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:21:46.093 05:12:46 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 10 -ge 15 ']' 00:21:46.093 [2024-07-23 05:12:46.301840] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:46.093 05:12:46 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=11 00:21:46.093 05:12:46 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:21:47.471 05:12:47 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:21:47.471 mke2fs 1.46.5 (30-Dec-2021) 00:21:47.471 Discarding device blocks: 0/131072 done 00:21:47.471 Creating filesystem with 131072 4k blocks and 32768 inodes 00:21:47.471 Filesystem UUID: 96f11840-7b11-40e9-bc2f-c0d72bcf3d39 00:21:47.471 Superblock backups stored on blocks: 00:21:47.471 32768, 98304 00:21:47.471 00:21:47.471 Allocating group tables: 0/4 done 00:21:47.471 Writing inode tables: 0/4 done 00:21:47.471 Creating journal (4096 blocks): done 00:21:47.471 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:21:47.471 05:12:47 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 11 -ge 15 ']' 00:21:47.471 05:12:47 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=12 00:21:47.471 05:12:47 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:21:47.471 [2024-07-23 05:12:47.622095] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:48.409 05:12:48 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:21:48.667 mke2fs 1.46.5 (30-Dec-2021) 00:21:48.667 Discarding device blocks: 0/131072 done 00:21:48.667 Creating filesystem with 131072 4k blocks and 32768 inodes 00:21:48.667 Filesystem UUID: 
02d580fa-72c2-4541-ab3a-b777cb9903ba 00:21:48.667 Superblock backups stored on blocks: 00:21:48.667 32768, 98304 00:21:48.667 00:21:48.667 Allocating group tables: 0/4 done 00:21:48.667 Writing inode tables: 0/4 done 00:21:48.926 Creating journal (4096 blocks): done 00:21:48.926 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:21:48.926 05:12:48 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 12 -ge 15 ']' 00:21:48.926 05:12:48 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=13 00:21:48.926 [2024-07-23 05:12:48.948393] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:48.926 05:12:48 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:21:49.860 05:12:49 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:21:49.860 mke2fs 1.46.5 (30-Dec-2021) 00:21:50.162 Discarding device blocks: 0/131072 done 00:21:50.162 Creating filesystem with 131072 4k blocks and 32768 inodes 00:21:50.162 Filesystem UUID: 3cc01e92-88f8-44ca-98fe-ef126e5a3e85 00:21:50.162 Superblock backups stored on blocks: 00:21:50.162 32768, 98304 00:21:50.162 00:21:50.162 Allocating group tables: 0/4 done 00:21:50.162 Writing inode tables: 0/4 done 00:21:50.162 Creating journal (4096 blocks): done 00:21:50.162 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:21:50.162 05:12:50 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 13 -ge 15 ']' 00:21:50.162 [2024-07-23 05:12:50.250814] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:50.162 05:12:50 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=14 00:21:50.162 05:12:50 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:21:51.097 05:12:51 
iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:21:51.097 mke2fs 1.46.5 (30-Dec-2021) 00:21:51.354 Discarding device blocks: 0/131072 done 00:21:51.354 Creating filesystem with 131072 4k blocks and 32768 inodes 00:21:51.354 Filesystem UUID: 9af3a9d1-8f12-49f9-afcb-3d832421ab59 00:21:51.354 Superblock backups stored on blocks: 00:21:51.354 32768, 98304 00:21:51.354 00:21:51.354 Allocating group tables: 0/4 done 00:21:51.354 Writing inode tables: 0/4 done 00:21:51.354 Creating journal (4096 blocks): done 00:21:51.612 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:21:51.612 05:12:51 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 14 -ge 15 ']' 00:21:51.612 05:12:51 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=15 00:21:51.612 05:12:51 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:21:51.612 [2024-07-23 05:12:51.574735] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:52.547 05:12:52 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:21:52.547 mke2fs 1.46.5 (30-Dec-2021) 00:21:52.804 Discarding device blocks: 0/131072 done 00:21:52.805 Creating filesystem with 131072 4k blocks and 32768 inodes 00:21:52.805 Filesystem UUID: d664dd0e-69d1-42e3-a139-a32dde19e543 00:21:52.805 Superblock backups stored on blocks: 00:21:52.805 32768, 98304 00:21:52.805 00:21:52.805 Allocating group tables: 0/4 done 00:21:52.805 Writing inode tables: 0/4 done 00:21:52.805 Creating journal (4096 blocks): done 00:21:52.805 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:21:52.805 mkfs failed as expected 00:21:52.805 Cleaning up iSCSI connection 00:21:52.805 05:12:52 iscsi_tgt.iscsi_tgt_ext4test -- 
common/autotest_common.sh@936 -- # '[' 15 -ge 15 ']' 00:21:52.805 [2024-07-23 05:12:52.893514] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:52.805 05:12:52 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@937 -- # return 1 00:21:52.805 05:12:52 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@70 -- # echo 'mkfs failed as expected' 00:21:52.805 05:12:52 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@73 -- # iscsicleanup 00:21:52.805 05:12:52 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:21:52.805 05:12:52 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:21:52.805 Logging out of session [sid: 71, target: iqn.2013-06.com.intel.ch.spdk:Target0, portal: 10.0.0.1,3260] 00:21:52.805 Logout of [sid: 71, target: iqn.2013-06.com.intel.ch.spdk:Target0, portal: 10.0.0.1,3260] successful. 00:21:52.805 05:12:52 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:21:52.805 05:12:52 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@983 -- # rm -rf 00:21:52.805 05:12:52 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_error_inject_error EE_Malloc0 clear failure 00:21:53.063 05:12:53 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_delete_target_node iqn.2013-06.com.intel.ch.spdk:Target0 00:21:53.320 Error injection test done 00:21:53.320 05:12:53 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@76 -- # echo 'Error injection test done' 00:21:53.320 05:12:53 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@78 -- # get_bdev_size Nvme0n1 00:21:53.320 05:12:53 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1378 -- # local bdev_name=Nvme0n1 00:21:53.320 05:12:53 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1379 -- # 
local bdev_info 00:21:53.320 05:12:53 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1380 -- # local bs 00:21:53.320 05:12:53 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1381 -- # local nb 00:21:53.320 05:12:53 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 00:21:53.576 05:12:53 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:53.576 { 00:21:53.576 "name": "Nvme0n1", 00:21:53.576 "aliases": [ 00:21:53.576 "fb09bd33-1a72-464c-a86a-82832191e9b7" 00:21:53.576 ], 00:21:53.576 "product_name": "NVMe disk", 00:21:53.576 "block_size": 4096, 00:21:53.576 "num_blocks": 1310720, 00:21:53.576 "uuid": "fb09bd33-1a72-464c-a86a-82832191e9b7", 00:21:53.576 "assigned_rate_limits": { 00:21:53.576 "rw_ios_per_sec": 0, 00:21:53.576 "rw_mbytes_per_sec": 0, 00:21:53.576 "r_mbytes_per_sec": 0, 00:21:53.576 "w_mbytes_per_sec": 0 00:21:53.576 }, 00:21:53.576 "claimed": false, 00:21:53.576 "zoned": false, 00:21:53.576 "supported_io_types": { 00:21:53.576 "read": true, 00:21:53.576 "write": true, 00:21:53.576 "unmap": true, 00:21:53.576 "flush": true, 00:21:53.576 "reset": true, 00:21:53.576 "nvme_admin": true, 00:21:53.576 "nvme_io": true, 00:21:53.576 "nvme_io_md": false, 00:21:53.576 "write_zeroes": true, 00:21:53.576 "zcopy": false, 00:21:53.576 "get_zone_info": false, 00:21:53.576 "zone_management": false, 00:21:53.576 "zone_append": false, 00:21:53.576 "compare": true, 00:21:53.576 "compare_and_write": false, 00:21:53.576 "abort": true, 00:21:53.576 "seek_hole": false, 00:21:53.576 "seek_data": false, 00:21:53.576 "copy": true, 00:21:53.576 "nvme_iov_md": false 00:21:53.576 }, 00:21:53.576 "driver_specific": { 00:21:53.576 "nvme": [ 00:21:53.576 { 00:21:53.576 "pci_address": "0000:00:10.0", 00:21:53.576 "trid": { 00:21:53.576 "trtype": "PCIe", 00:21:53.576 "traddr": "0000:00:10.0" 00:21:53.576 }, 00:21:53.576 "ctrlr_data": { 
00:21:53.576 "cntlid": 0, 00:21:53.577 "vendor_id": "0x1b36", 00:21:53.577 "model_number": "QEMU NVMe Ctrl", 00:21:53.577 "serial_number": "12340", 00:21:53.577 "firmware_revision": "8.0.0", 00:21:53.577 "subnqn": "nqn.2019-08.org.qemu:12340", 00:21:53.577 "oacs": { 00:21:53.577 "security": 0, 00:21:53.577 "format": 1, 00:21:53.577 "firmware": 0, 00:21:53.577 "ns_manage": 1 00:21:53.577 }, 00:21:53.577 "multi_ctrlr": false, 00:21:53.577 "ana_reporting": false 00:21:53.577 }, 00:21:53.577 "vs": { 00:21:53.577 "nvme_version": "1.4" 00:21:53.577 }, 00:21:53.577 "ns_data": { 00:21:53.577 "id": 1, 00:21:53.577 "can_share": false 00:21:53.577 } 00:21:53.577 } 00:21:53.577 ], 00:21:53.577 "mp_policy": "active_passive" 00:21:53.577 } 00:21:53.577 } 00:21:53.577 ]' 00:21:53.577 05:12:53 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:53.835 05:12:53 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1383 -- # bs=4096 00:21:53.835 05:12:53 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:53.835 05:12:53 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1384 -- # nb=1310720 00:21:53.835 05:12:53 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:21:53.835 05:12:53 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1388 -- # echo 5120 00:21:53.835 05:12:53 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@78 -- # bdev_size=5120 00:21:53.835 05:12:53 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@79 -- # split_size=2560 00:21:53.835 05:12:53 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@80 -- # split_size=2560 00:21:53.835 05:12:53 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create Nvme0n1 2 -s 2560 00:21:54.093 Nvme0n1p0 Nvme0n1p1 00:21:54.093 05:12:54 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@82 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target1 Target1_alias Nvme0n1p0:0 1:2 64 -d 00:21:54.352 05:12:54 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@84 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:21:54.352 10.0.0.1:3260,1 iqn.2013-06.com.intel.ch.spdk:Target1 00:21:54.352 05:12:54 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@85 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:21:54.352 Logging in to [iface: default, target: iqn.2013-06.com.intel.ch.spdk:Target1, portal: 10.0.0.1,3260] 00:21:54.352 Login to [iface: default, target: iqn.2013-06.com.intel.ch.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:21:54.352 05:12:54 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@86 -- # waitforiscsidevices 1 00:21:54.352 05:12:54 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@116 -- # local num=1 00:21:54.352 05:12:54 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:21:54.352 05:12:54 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:21:54.352 05:12:54 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:21:54.352 05:12:54 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:21:54.352 [2024-07-23 05:12:54.385414] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:54.352 05:12:54 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # n=1 00:21:54.352 05:12:54 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:21:54.352 05:12:54 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@123 -- # return 0 00:21:54.352 05:12:54 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@88 -- # grep 'Attached scsi disk' 00:21:54.352 05:12:54 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@88 -- # iscsiadm -m session -P 3 00:21:54.352 05:12:54 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@88 -- # 
awk '{print $4}' 00:21:54.352 05:12:54 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@88 -- # head -n1 00:21:54.352 05:12:54 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@88 -- # dev=sda 00:21:54.352 05:12:54 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@89 -- # waitforfile /dev/sda 00:21:54.352 05:12:54 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1265 -- # local i=0 00:21:54.352 05:12:54 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda ']' 00:21:54.352 05:12:54 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1272 -- # '[' '!' -e /dev/sda ']' 00:21:54.352 05:12:54 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1276 -- # return 0 00:21:54.352 05:12:54 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@91 -- # make_filesystem ext4 /dev/sda 00:21:54.352 05:12:54 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@924 -- # local fstype=ext4 00:21:54.352 05:12:54 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@925 -- # local dev_name=/dev/sda 00:21:54.352 05:12:54 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@926 -- # local i=0 00:21:54.352 05:12:54 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@927 -- # local force 00:21:54.352 05:12:54 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:21:54.352 05:12:54 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@930 -- # force=-F 00:21:54.352 05:12:54 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:21:54.352 mke2fs 1.46.5 (30-Dec-2021) 00:21:54.353 Discarding device blocks: 0/655360 done 00:21:54.353 Creating filesystem with 655360 4k blocks and 163840 inodes 00:21:54.353 Filesystem UUID: dac2d6b3-7430-483a-af41-ba6472d8a146 00:21:54.353 Superblock backups stored on blocks: 00:21:54.353 32768, 98304, 163840, 229376, 294912 00:21:54.353 00:21:54.353 Allocating group tables: 0/20 done 00:21:54.353 Writing 
inode tables: 0/20 done 00:21:54.611 Creating journal (16384 blocks): done 00:21:54.611 Writing superblocks and filesystem accounting information: 0/20 done 00:21:54.611 00:21:54.611 [2024-07-23 05:12:54.795930] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:54.611 05:12:54 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@943 -- # return 0 00:21:54.611 05:12:54 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@92 -- # mkdir -p /mnt/sdadir 00:21:54.611 05:12:54 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@93 -- # mount -o sync /dev/sda /mnt/sdadir 00:21:54.612 05:12:54 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@95 -- # rsync -qav --exclude=.git '--exclude=*.o' /home/vagrant/spdk_repo/spdk/ /mnt/sdadir/spdk 00:23:16.084 05:14:04 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@97 -- # make -C /mnt/sdadir/spdk clean 00:23:16.084 make: Entering directory '/mnt/sdadir/spdk' 00:23:54.831 make[1]: Nothing to be done for 'clean'. 00:23:54.831 make: Leaving directory '/mnt/sdadir/spdk' 00:23:54.831 05:14:53 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@98 -- # cd /mnt/sdadir/spdk 00:23:54.831 05:14:53 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@98 -- # ./configure --disable-unit-tests --disable-tests 00:23:54.831 Using default SPDK env in /mnt/sdadir/spdk/lib/env_dpdk 00:23:54.831 Using default DPDK in /mnt/sdadir/spdk/dpdk/build 00:24:16.793 Configuring ISA-L (logfile: /mnt/sdadir/spdk/.spdk-isal.log)...done. 00:24:34.892 Configuring ISA-L-crypto (logfile: /mnt/sdadir/spdk/.spdk-isal-crypto.log)...done. 00:24:35.838 Creating mk/config.mk...done. 00:24:35.838 Creating mk/cc.flags.mk...done. 00:24:35.838 Type 'make' to build. 00:24:35.838 05:15:35 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@99 -- # make -C /mnt/sdadir/spdk -j 00:24:35.838 make: Entering directory '/mnt/sdadir/spdk' 00:24:36.096 make[1]: Nothing to be done for 'all'. 
00:24:58.021 The Meson build system
00:24:58.021 Version: 1.3.1
00:24:58.021 Source dir: /mnt/sdadir/spdk/dpdk
00:24:58.021 Build dir: /mnt/sdadir/spdk/dpdk/build-tmp
00:24:58.021 Build type: native build
00:24:58.021 Program cat found: YES (/usr/bin/cat)
00:24:58.021 Project name: DPDK
00:24:58.021 Project version: 24.03.0
00:24:58.021 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:24:58.021 C linker for the host machine: cc ld.bfd 2.39-16
00:24:58.021 Host machine cpu family: x86_64
00:24:58.021 Host machine cpu: x86_64
00:24:58.021 Program pkg-config found: YES (/usr/bin/pkg-config)
00:24:58.021 Program check-symbols.sh found: YES (/mnt/sdadir/spdk/dpdk/buildtools/check-symbols.sh)
00:24:58.021 Program options-ibverbs-static.sh found: YES (/mnt/sdadir/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:24:58.021 Program python3 found: YES (/usr/bin/python3)
00:24:58.021 Program cat found: YES (/usr/bin/cat)
00:24:58.021 Compiler for C supports arguments -march=native: YES
00:24:58.021 Checking for size of "void *" : 8
00:24:58.021 Checking for size of "void *" : 8 (cached)
00:24:58.021 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:24:58.021 Library m found: YES
00:24:58.021 Library numa found: YES
00:24:58.021 Has header "numaif.h" : YES
00:24:58.021 Library fdt found: NO
00:24:58.021 Library execinfo found: NO
00:24:58.021 Has header "execinfo.h" : YES
00:24:58.021 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:24:58.021 Run-time dependency libarchive found: NO (tried pkgconfig)
00:24:58.021 Run-time dependency libbsd found: NO (tried pkgconfig)
00:24:58.021 Run-time dependency jansson found: NO (tried pkgconfig)
00:24:58.021 Run-time dependency openssl found: YES 3.0.9
00:24:58.021 Run-time dependency libpcap found: YES 1.10.4
00:24:58.021 Has header "pcap.h" with dependency libpcap: YES
00:24:58.021 Compiler for C supports arguments -Wcast-qual: YES
00:24:58.021 Compiler for C supports arguments -Wdeprecated: YES
00:24:58.021 Compiler for C supports arguments -Wformat: YES
00:24:58.021 Compiler for C supports arguments -Wformat-nonliteral: YES
00:24:58.021 Compiler for C supports arguments -Wformat-security: YES
00:24:58.021 Compiler for C supports arguments -Wmissing-declarations: YES
00:24:58.021 Compiler for C supports arguments -Wmissing-prototypes: YES
00:24:58.021 Compiler for C supports arguments -Wnested-externs: YES
00:24:58.021 Compiler for C supports arguments -Wold-style-definition: YES
00:24:58.021 Compiler for C supports arguments -Wpointer-arith: YES
00:24:58.021 Compiler for C supports arguments -Wsign-compare: YES
00:24:58.021 Compiler for C supports arguments -Wstrict-prototypes: YES
00:24:58.021 Compiler for C supports arguments -Wundef: YES
00:24:58.021 Compiler for C supports arguments -Wwrite-strings: YES
00:24:58.021 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:24:58.021 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:24:58.021 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:24:58.021 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:24:58.021 Program objdump found: YES (/usr/bin/objdump)
00:24:58.021 Compiler for C supports arguments -mavx512f: YES
00:24:58.021 Checking if "AVX512 checking" compiles: YES
00:24:58.021 Fetching value of define "__SSE4_2__" : 1
00:24:58.021 Fetching value of define "__AES__" : 1
00:24:58.021 Fetching value of define "__AVX__" : 1
00:24:58.021 Fetching value of define "__AVX2__" : 1
00:24:58.021 Fetching value of define "__AVX512BW__" : (undefined)
00:24:58.021 Fetching value of define "__AVX512CD__" : (undefined)
00:24:58.021 Fetching value of define "__AVX512DQ__" : (undefined)
00:24:58.021 Fetching value of define "__AVX512F__" : (undefined)
00:24:58.021 Fetching value of define "__AVX512VL__" : (undefined)
00:24:58.021 Fetching value of define "__PCLMUL__" : 1
00:24:58.021 Fetching value of define "__RDRND__" : 1
00:24:58.021 Fetching value of define "__RDSEED__" : 1
00:24:58.021 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:24:58.021 Fetching value of define "__znver1__" : (undefined)
00:24:58.021 Fetching value of define "__znver2__" : (undefined)
00:24:58.021 Fetching value of define "__znver3__" : (undefined)
00:24:58.021 Fetching value of define "__znver4__" : (undefined)
00:24:58.021 Compiler for C supports arguments -Wno-format-truncation: YES
00:24:58.021 Checking for function "getentropy" : NO
00:24:58.021 Fetching value of define "__PCLMUL__" : 1 (cached)
00:24:58.021 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:24:58.021 Compiler for C supports arguments -mpclmul: YES
00:24:58.021 Compiler for C supports arguments -maes: YES
00:24:58.021 Compiler for C supports arguments -mavx512f: YES (cached)
00:24:58.021 Compiler for C supports arguments -mavx512bw: YES
00:24:58.021 Compiler for C supports arguments -mavx512dq: YES
00:24:58.021 Compiler for C supports arguments -mavx512vl: YES
00:24:58.021 Compiler for C supports arguments -mvpclmulqdq: YES
00:24:58.021 Compiler for C supports arguments -mavx2: YES
00:24:58.021 Compiler for C supports arguments -mavx: YES
00:24:58.021 Compiler for C supports arguments -Wno-cast-qual: YES
00:24:58.021 Has header "linux/userfaultfd.h" : YES
00:24:58.021 Has header "linux/vduse.h" : YES
00:24:58.021 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:24:58.021 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:24:58.021 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:24:58.021 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:24:58.021 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:24:58.021 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:24:58.021 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:24:58.021 Program doxygen found: YES (/usr/bin/doxygen)
00:24:58.021 Configuring doxy-api-html.conf using configuration
00:24:58.021 Configuring doxy-api-man.conf using configuration
00:24:58.021 Program mandb found: YES (/usr/bin/mandb)
00:24:58.021 Program sphinx-build found: NO
00:24:58.021 Configuring rte_build_config.h using configuration
00:24:58.021 Message:
00:24:58.021 =================
00:24:58.021 Applications Enabled
00:24:58.021 =================
00:24:58.021 
00:24:58.021 apps:
00:24:58.021 
00:24:58.021 
00:24:58.021 Message:
00:24:58.021 =================
00:24:58.021 Libraries Enabled
00:24:58.021 =================
00:24:58.021 
00:24:58.021 libs:
00:24:58.021 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:24:58.021 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:24:58.021 cryptodev, dmadev, power, reorder, security, vhost,
00:24:58.021 
00:24:58.021 Message:
00:24:58.021 ===============
00:24:58.021 Drivers Enabled
00:24:58.021 ===============
00:24:58.021 
00:24:58.021 common:
00:24:58.022 
00:24:58.022 bus:
00:24:58.022 pci, vdev,
00:24:58.022 mempool:
00:24:58.022 ring,
00:24:58.022 dma:
00:24:58.022 
00:24:58.022 net:
00:24:58.022 
00:24:58.022 crypto:
00:24:58.022 
00:24:58.022 compress:
00:24:58.022 
00:24:58.022 vdpa:
00:24:58.022 
00:24:58.022 
00:24:58.022 Message:
00:24:58.022 =================
00:24:58.022 Content Skipped
00:24:58.022 =================
00:24:58.022 
00:24:58.022 apps:
00:24:58.022 dumpcap: explicitly disabled via build config
00:24:58.022 graph: explicitly disabled via build config
00:24:58.022 pdump: explicitly disabled via build config
00:24:58.022 proc-info: explicitly disabled via build config
00:24:58.022 test-acl: explicitly disabled via build config
00:24:58.022 test-bbdev: explicitly disabled via build config
00:24:58.022 test-cmdline: explicitly disabled via build config
00:24:58.022 test-compress-perf: explicitly disabled via build config
00:24:58.022 test-crypto-perf: explicitly disabled via build config
00:24:58.022 test-dma-perf: explicitly disabled via build config
00:24:58.022 test-eventdev: explicitly disabled via build config
00:24:58.022 test-fib: explicitly disabled via build config
00:24:58.022 test-flow-perf: explicitly disabled via build config
00:24:58.022 test-gpudev: explicitly disabled via build config
00:24:58.022 test-mldev: explicitly disabled via build config
00:24:58.022 test-pipeline: explicitly disabled via build config
00:24:58.022 test-pmd: explicitly disabled via build config
00:24:58.022 test-regex: explicitly disabled via build config
00:24:58.022 test-sad: explicitly disabled via build config
00:24:58.022 test-security-perf: explicitly disabled via build config
00:24:58.022 
00:24:58.022 libs:
00:24:58.022 argparse: explicitly disabled via build config
00:24:58.022 metrics: explicitly disabled via build config
00:24:58.022 acl: explicitly disabled via build config
00:24:58.022 bbdev: explicitly disabled via build config
00:24:58.022 bitratestats: explicitly disabled via build config
00:24:58.022 bpf: explicitly disabled via build config
00:24:58.022 cfgfile: explicitly disabled via build config
00:24:58.022 distributor: explicitly disabled via build config
00:24:58.022 efd: explicitly disabled via build config
00:24:58.022 eventdev: explicitly disabled via build config
00:24:58.022 dispatcher: explicitly disabled via build config
00:24:58.022 gpudev: explicitly disabled via build config
00:24:58.022 gro: explicitly disabled via build config
00:24:58.022 gso: explicitly disabled via build config
00:24:58.022 ip_frag: explicitly disabled via build config
00:24:58.022 jobstats: explicitly disabled via build config
00:24:58.022 latencystats: explicitly disabled via build config
00:24:58.022 lpm: explicitly disabled via build config
00:24:58.022 member: explicitly disabled via build config
00:24:58.022 pcapng: explicitly disabled via build config
00:24:58.022 rawdev: explicitly disabled via build config
00:24:58.022 regexdev: explicitly disabled via build config
00:24:58.022 mldev: explicitly disabled via build config
00:24:58.022 rib: explicitly disabled via build config
00:24:58.022 sched: explicitly disabled via build config
00:24:58.022 stack: explicitly disabled via build config
00:24:58.022 ipsec: explicitly disabled via build config
00:24:58.022 pdcp: explicitly disabled via build config
00:24:58.022 fib: explicitly disabled via build config
00:24:58.022 port: explicitly disabled via build config
00:24:58.022 pdump: explicitly disabled via build config
00:24:58.022 table: explicitly disabled via build config
00:24:58.022 pipeline: explicitly disabled via build config
00:24:58.022 graph: explicitly disabled via build config
00:24:58.022 node: explicitly disabled via build config
00:24:58.022 
00:24:58.022 drivers:
00:24:58.022 common/cpt: not in enabled drivers build config
00:24:58.022 common/dpaax: not in enabled drivers build config
00:24:58.022 common/iavf: not in enabled drivers build config
00:24:58.022 common/idpf: not in enabled drivers build config
00:24:58.022 common/ionic: not in enabled drivers build config
00:24:58.022 common/mvep: not in enabled drivers build config
00:24:58.022 common/octeontx: not in enabled drivers build config
00:24:58.022 bus/auxiliary: not in enabled drivers build config
00:24:58.022 bus/cdx: not in enabled drivers build config
00:24:58.022 bus/dpaa: not in enabled drivers build config
00:24:58.022 bus/fslmc: not in enabled drivers build config
00:24:58.022 bus/ifpga: not in enabled drivers build config
00:24:58.022 bus/platform: not in enabled drivers build config
00:24:58.022 bus/uacce: not in enabled drivers build config
00:24:58.022 bus/vmbus: not in enabled drivers build config
00:24:58.022 common/cnxk: not in enabled drivers build config
00:24:58.022 common/mlx5: not in enabled drivers build config
00:24:58.022 common/nfp: not in enabled drivers build config
00:24:58.022 common/nitrox: not in enabled drivers build config
00:24:58.022 common/qat: not in enabled drivers build config
00:24:58.022 common/sfc_efx: not in enabled drivers build config
00:24:58.022 mempool/bucket: not in enabled drivers build config
00:24:58.022 mempool/cnxk: not in enabled drivers build config
00:24:58.022 mempool/dpaa: not in enabled drivers build config
00:24:58.022 mempool/dpaa2: not in enabled drivers build config
00:24:58.022 mempool/octeontx: not in enabled drivers build config
00:24:58.022 mempool/stack: not in enabled drivers build config
00:24:58.022 dma/cnxk: not in enabled drivers build config
00:24:58.022 dma/dpaa: not in enabled drivers build config
00:24:58.022 dma/dpaa2: not in enabled drivers build config
00:24:58.022 dma/hisilicon: not in enabled drivers build config
00:24:58.022 dma/idxd: not in enabled drivers build config
00:24:58.022 dma/ioat: not in enabled drivers build config
00:24:58.022 dma/skeleton: not in enabled drivers build config
00:24:58.022 net/af_packet: not in enabled drivers build config
00:24:58.022 net/af_xdp: not in enabled drivers build config
00:24:58.022 net/ark: not in enabled drivers build config
00:24:58.022 net/atlantic: not in enabled drivers build config
00:24:58.022 net/avp: not in enabled drivers build config
00:24:58.022 net/axgbe: not in enabled drivers build config
00:24:58.022 net/bnx2x: not in enabled drivers build config
00:24:58.022 net/bnxt: not in enabled drivers build config
00:24:58.022 net/bonding: not in enabled drivers build config
00:24:58.022 net/cnxk: not in enabled drivers build config
00:24:58.022 net/cpfl: not in enabled drivers build config
00:24:58.022 net/cxgbe: not in enabled drivers build config
00:24:58.022 net/dpaa: not in enabled drivers build config
00:24:58.022 net/dpaa2: not in enabled drivers build config
00:24:58.022 net/e1000: not in enabled drivers build config
00:24:58.022 net/ena: not in enabled drivers build config
00:24:58.022 net/enetc: not in enabled drivers build config
00:24:58.022 net/enetfec: not in enabled drivers build config
00:24:58.022 net/enic: not in enabled drivers build config
00:24:58.022 net/failsafe: not in enabled drivers build config
00:24:58.022 net/fm10k: not in enabled drivers build config
00:24:58.022 net/gve: not in enabled drivers build config
00:24:58.022 net/hinic: not in enabled drivers build config
00:24:58.022 net/hns3: not in enabled drivers build config
00:24:58.022 net/i40e: not in enabled drivers build config
00:24:58.022 net/iavf: not in enabled drivers build config
00:24:58.022 net/ice: not in enabled drivers build config
00:24:58.022 net/idpf: not in enabled drivers build config
00:24:58.022 net/igc: not in enabled drivers build config
00:24:58.022 net/ionic: not in enabled drivers build config
00:24:58.022 net/ipn3ke: not in enabled drivers build config
00:24:58.022 net/ixgbe: not in enabled drivers build config
00:24:58.022 net/mana: not in enabled drivers build config
00:24:58.022 net/memif: not in enabled drivers build config
00:24:58.022 net/mlx4: not in enabled drivers build config
00:24:58.022 net/mlx5: not in enabled drivers build config
00:24:58.022 net/mvneta: not in enabled drivers build config
00:24:58.022 net/mvpp2: not in enabled drivers build config
00:24:58.022 net/netvsc: not in enabled drivers build config
00:24:58.022 net/nfb: not in enabled drivers build config
00:24:58.022 net/nfp: not in enabled drivers build config
00:24:58.022 net/ngbe: not in enabled drivers build config
00:24:58.022 net/null: not in enabled drivers build config
00:24:58.022 net/octeontx: not in enabled drivers build config
00:24:58.022 net/octeon_ep: not in enabled drivers build config
00:24:58.022 net/pcap: not in enabled drivers build config
00:24:58.022 net/pfe: not in enabled drivers build config
00:24:58.022 net/qede: not in enabled drivers build config
00:24:58.022 net/ring: not in enabled drivers build config
00:24:58.022 net/sfc: not in enabled drivers build config
00:24:58.022 net/softnic: not in enabled drivers build config
00:24:58.022 net/tap: not in enabled drivers build config
00:24:58.022 net/thunderx: not in enabled drivers build config
00:24:58.022 net/txgbe: not in enabled drivers build config
00:24:58.022 net/vdev_netvsc: not in enabled drivers build config
00:24:58.022 net/vhost: not in enabled drivers build config
00:24:58.022 net/virtio: not in enabled drivers build config
00:24:58.022 net/vmxnet3: not in enabled drivers build config
00:24:58.022 raw/*: missing internal dependency, "rawdev"
00:24:58.022 crypto/armv8: not in enabled drivers build config
00:24:58.022 crypto/bcmfs: not in enabled drivers build config
00:24:58.022 crypto/caam_jr: not in enabled drivers build config
00:24:58.022 crypto/ccp: not in enabled drivers build config
00:24:58.022 crypto/cnxk: not in enabled drivers build config
00:24:58.022 crypto/dpaa_sec: not in enabled drivers build config
00:24:58.022 crypto/dpaa2_sec: not in enabled drivers build config
00:24:58.022 crypto/ipsec_mb: not in enabled drivers build config
00:24:58.022 crypto/mlx5: not in enabled drivers build config
00:24:58.022 crypto/mvsam: not in enabled drivers build config
00:24:58.022 crypto/nitrox: not in enabled drivers build config
00:24:58.023 crypto/null: not in enabled drivers build config
00:24:58.023 crypto/octeontx: not in enabled drivers build config
00:24:58.023 crypto/openssl: not in enabled drivers build config
00:24:58.023 crypto/scheduler: not in enabled drivers build config
00:24:58.023 crypto/uadk: not in enabled drivers build config
00:24:58.023 crypto/virtio: not in enabled drivers build config
00:24:58.023 compress/isal: not in enabled drivers build config
00:24:58.023 compress/mlx5: not in enabled drivers build config
00:24:58.023 compress/nitrox: not in enabled drivers build config
00:24:58.023 compress/octeontx: not in enabled drivers build config
00:24:58.023 compress/zlib: not in enabled drivers build config
00:24:58.023 regex/*: missing internal dependency, "regexdev"
00:24:58.023 ml/*: missing internal dependency, "mldev"
00:24:58.023 vdpa/ifc: not in enabled drivers build config
00:24:58.023 vdpa/mlx5: not in enabled drivers build config
00:24:58.023 vdpa/nfp: not in enabled drivers build config
00:24:58.023 vdpa/sfc: not in enabled drivers build config
00:24:58.023 event/*: missing internal dependency, "eventdev"
00:24:58.023 baseband/*: missing internal dependency, "bbdev"
00:24:58.023 gpu/*: missing internal dependency, "gpudev"
00:24:58.023 
00:24:58.023 
00:24:58.023 Build targets in project: 61
00:24:58.023 
00:24:58.023 DPDK 24.03.0
00:24:58.023 
00:24:58.023 User defined options
00:24:58.023 default_library : static
00:24:58.023 libdir : lib
00:24:58.023 prefix : /mnt/sdadir/spdk/dpdk/build
00:24:58.023 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Wno-error
00:24:58.023 c_link_args : 
00:24:58.023 cpu_instruction_set: native
00:24:58.023 disable_apps : dumpcap,test-pipeline,test-bbdev,test-gpudev,test-security-perf,test-acl,test,test-compress-perf,test-crypto-perf,test-cmdline,test-pmd,test-dma-perf,test-eventdev,test-flow-perf,test-fib,pdump,test-mldev,graph,test-regex,test-sad,proc-info
00:24:58.023 disable_libs : stack,dispatcher,efd,pdcp,mldev,pipeline,rawdev,ipsec,bitratestats,port,lpm,regexdev,jobstats,eventdev,sched,gpudev,ip_frag,member,gro,acl,table,metrics,bbdev,pdump,distributor,rib,graph,fib,pcapng,gso,argparse,latencystats,node,cfgfile,bpf
00:24:58.023 enable_docs : false
00:24:58.023 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:24:58.023 enable_kmods : false
00:24:58.023 max_lcores : 128
00:24:58.023 tests : false
00:24:58.023 
00:24:58.023 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:24:58.590 ninja: Entering directory `/mnt/sdadir/spdk/dpdk/build-tmp'
00:24:58.590 [1/244] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:24:58.849 [2/244] Compiling C object lib/librte_log.a.p/log_log.c.o
00:24:59.107 [3/244] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:24:59.107 [4/244] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:24:59.107 [5/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:24:59.107 [6/244] Linking static target lib/librte_log.a
00:24:59.107 [7/244] Linking static target lib/librte_kvargs.a
00:24:59.365 [8/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:24:59.623 [9/244] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:24:59.623 [10/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:24:59.623 [11/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:24:59.623 [12/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:24:59.623 [13/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:24:59.623 [14/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:24:59.623 [15/244] Linking target lib/librte_log.so.24.1
00:24:59.881 [16/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:25:00.140 [17/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:25:00.140 [18/244] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:25:00.140 [19/244] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:25:00.140 [20/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:25:00.398 [21/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:25:00.398 [22/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:25:00.398 [23/244] Linking target lib/librte_kvargs.so.24.1
00:25:00.398 [24/244] Linking static target lib/librte_telemetry.a
00:25:00.398 [25/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:25:00.398 [26/244] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:25:00.398 [27/244] Linking target lib/librte_telemetry.so.24.1
00:25:00.660 [28/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:25:00.660 [29/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:25:00.660 [30/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:25:00.660 [31/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:25:00.660 [32/244] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:25:00.660 [33/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:25:00.923 [34/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:25:00.923 [35/244] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:25:00.923 [36/244] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:25:00.923 [37/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:25:01.189 [38/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:25:01.189 [39/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:25:01.189 [40/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:25:01.189 [41/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:25:01.189 [42/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:25:01.448 [43/244] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:25:01.448 [44/244] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:25:01.706 [45/244] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:25:01.706 [46/244] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:25:01.706 [47/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:25:01.706 [48/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:25:01.965 [49/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:25:02.224 [50/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:25:02.224 [51/244] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:25:02.224 [52/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:25:02.224 [53/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:25:02.224 [54/244] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:25:02.224 [55/244] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:25:02.482 [56/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:25:02.482 [57/244] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:25:02.482 [58/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:25:02.482 [59/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:25:02.482 [60/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:25:02.482 [61/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:25:02.741 [62/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:25:02.999 [63/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:25:02.999 [64/244] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:25:02.999 [65/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:25:02.999 [66/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:25:03.258 [67/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:25:03.258 [68/244] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:25:03.258 [69/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:25:03.517 [70/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:25:03.517 [71/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:25:03.517 [72/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:25:03.517 [73/244] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:25:03.517 [74/244] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:25:03.775 [75/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:25:03.775 [76/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:25:03.775 [77/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:25:04.034 [78/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:25:04.034 [79/244] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:25:04.291 [80/244] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:25:04.291 [81/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:25:04.291 [82/244] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:25:04.549 [83/244] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:25:04.549 [84/244] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:25:04.549 [85/244] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:25:04.549 [86/244] Linking static target lib/librte_ring.a
00:25:04.808 [87/244] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:25:04.808 [88/244] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:25:04.808 [89/244] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:25:04.808 [90/244] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:25:04.808 [91/244] Linking static target lib/net/libnet_crc_avx512_lib.a
00:25:04.808 [92/244] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:25:04.808 [93/244] Linking static target lib/librte_mempool.a
00:25:04.808 [94/244] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:25:05.067 [95/244] Linking static target lib/librte_rcu.a
00:25:05.325 [96/244] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:25:05.325 [97/244] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:25:05.325 [98/244] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:25:05.325 [99/244] Linking static target lib/librte_eal.a
00:25:05.325 [100/244] Linking static target lib/librte_mbuf.a
00:25:05.325 [101/244] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:25:05.584 [102/244] Linking target lib/librte_eal.so.24.1
00:25:05.584 [103/244] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:25:05.584 [104/244] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:25:05.584 [105/244] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:25:05.584 [106/244] Linking static target lib/librte_meter.a
00:25:05.584 [107/244] Linking static target lib/librte_net.a
00:25:05.584 [108/244] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:25:05.842 [109/244] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:25:05.842 [110/244] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:25:05.842 [111/244] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:25:05.842 [112/244] Linking target lib/librte_ring.so.24.1
00:25:05.842 [113/244] Linking target lib/librte_meter.so.24.1
00:25:06.100 [114/244] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:25:06.100 [115/244] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:25:06.100 [116/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:25:06.374 [117/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:25:06.374 [118/244] Linking target lib/librte_rcu.so.24.1
00:25:06.374 [119/244] Linking target lib/librte_mempool.so.24.1
00:25:06.642 [120/244] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:25:06.642 [121/244] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:25:06.899 [122/244] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:25:06.899 [123/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:25:06.899 [124/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:25:06.899 [125/244] Linking target lib/librte_mbuf.so.24.1
00:25:07.157 [126/244] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:25:07.157 [127/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:25:07.157 [128/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:25:07.157 [129/244] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:25:07.158 [130/244] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:25:07.158 [131/244] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:25:07.158 [132/244] Linking static target lib/librte_pci.a
00:25:07.158 [133/244] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:25:07.158 [134/244] Linking target lib/librte_pci.so.24.1
00:25:07.416 [135/244] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:25:07.416 [136/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:25:07.416 [137/244] Linking target lib/librte_net.so.24.1
00:25:07.416 [138/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:25:07.416 [139/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:25:07.416 [140/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:25:07.416 [141/244] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:25:07.416 [142/244] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:25:07.674 [143/244] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:25:07.674 [144/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:25:07.674 [145/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:25:07.674 [146/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:25:07.674 [147/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:25:07.674 [148/244] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:25:07.674 [149/244] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:25:07.674 [150/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:25:07.674 [151/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:25:08.240 [152/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:25:08.240 [153/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:25:08.240 [154/244] Linking static target lib/librte_cmdline.a
00:25:08.497 [155/244] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:25:08.497 [156/244] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:25:08.497 [157/244] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:25:08.497 [158/244] Linking target lib/librte_cmdline.so.24.1
00:25:08.754 [159/244] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:25:08.754 [160/244] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:25:08.754 [161/244] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:25:08.754 [162/244] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:25:09.013 [163/244] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:25:09.013 [164/244] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:25:09.013 [165/244] Linking static target lib/librte_compressdev.a
00:25:09.013 [166/244] Linking static target lib/librte_timer.a 00:25:09.013 [167/244] Linking target lib/librte_compressdev.so.24.1 00:25:09.013 [168/244] Linking target lib/librte_timer.so.24.1 00:25:09.013 [169/244] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:25:09.271 [170/244] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:25:09.271 [171/244] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:25:09.271 [172/244] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:25:09.271 [173/244] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:25:09.837 [174/244] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:25:09.837 [175/244] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:25:09.837 [176/244] Linking static target lib/librte_dmadev.a 00:25:09.837 [177/244] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:25:10.095 [178/244] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:25:10.095 [179/244] Linking target lib/librte_dmadev.so.24.1 00:25:10.095 [180/244] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:25:10.095 [181/244] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:25:10.095 [182/244] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:25:10.095 [183/244] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:25:10.353 [184/244] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:25:10.353 [185/244] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:25:10.353 [186/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:25:10.353 [187/244] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:25:10.611 [188/244] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:25:10.869 
[189/244] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:25:10.869 [190/244] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:25:10.869 [191/244] Linking static target lib/librte_hash.a 00:25:10.869 [192/244] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:25:10.869 [193/244] Linking target lib/librte_ethdev.so.24.1 00:25:11.127 [194/244] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:25:11.127 [195/244] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:25:11.127 [196/244] Linking target lib/librte_hash.so.24.1 00:25:11.127 [197/244] Linking target lib/librte_cryptodev.so.24.1 00:25:11.127 [198/244] Linking static target lib/librte_cryptodev.a 00:25:11.127 [199/244] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:25:11.127 [200/244] Linking static target lib/librte_reorder.a 00:25:11.127 [201/244] Linking static target lib/librte_security.a 00:25:11.127 [202/244] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:25:11.127 [203/244] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:25:11.127 [204/244] Linking target lib/librte_reorder.so.24.1 00:25:11.385 [205/244] Linking static target lib/librte_power.a 00:25:11.385 [206/244] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:25:11.385 [207/244] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:25:11.385 [208/244] Linking target lib/librte_power.so.24.1 00:25:11.385 [209/244] Linking static target lib/librte_ethdev.a 00:25:11.385 [210/244] Linking target lib/librte_security.so.24.1 00:25:12.320 [211/244] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:25:12.320 [212/244] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:25:12.320 [213/244] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:25:12.320 [214/244] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:25:12.320 [215/244] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:25:12.320 [216/244] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:25:12.320 [217/244] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:25:12.320 [218/244] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:25:12.320 [219/244] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:25:12.578 [220/244] Linking static target drivers/libtmp_rte_bus_vdev.a 00:25:12.578 [221/244] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:25:12.837 [222/244] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:25:12.837 [223/244] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:25:12.837 [224/244] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:25:12.837 [225/244] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:25:12.837 [226/244] Linking static target drivers/libtmp_rte_bus_pci.a 00:25:12.837 [227/244] Linking static target drivers/librte_bus_vdev.a 00:25:13.095 [228/244] Linking target drivers/librte_bus_vdev.so.24.1 00:25:13.095 [229/244] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:25:13.353 [230/244] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:25:13.353 [231/244] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:25:13.353 [232/244] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:25:13.353 [233/244] Linking static target drivers/libtmp_rte_mempool_ring.a 00:25:13.353 [234/244] Linking static target drivers/librte_bus_pci.a 00:25:13.611 [235/244] Linking target drivers/librte_bus_pci.so.24.1 00:25:13.611 [236/244] Generating 
drivers/rte_mempool_ring.pmd.c with a custom command 00:25:13.869 [237/244] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:25:13.869 [238/244] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:25:13.869 [239/244] Linking static target drivers/librte_mempool_ring.a 00:25:13.869 [240/244] Linking target drivers/librte_mempool_ring.so.24.1 00:25:15.832 [241/244] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:25:23.948 [242/244] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:25:24.515 [243/244] Linking target lib/librte_vhost.so.24.1 00:25:24.773 [244/244] Linking static target lib/librte_vhost.a 00:25:24.773 INFO: autodetecting backend as ninja 00:25:24.773 INFO: calculating backend command to run: /usr/local/bin/ninja -C /mnt/sdadir/spdk/dpdk/build-tmp 00:25:31.351 CC lib/ut_mock/mock.o 00:25:31.351 CC lib/log/log.o 00:25:31.351 CC lib/log/log_flags.o 00:25:31.351 CC lib/log/log_deprecated.o 00:25:31.351 LIB libspdk_ut_mock.a 00:25:31.351 LIB libspdk_log.a 00:25:31.351 CC lib/ioat/ioat.o 00:25:31.351 CC lib/util/base64.o 00:25:31.351 CC lib/util/bit_array.o 00:25:31.351 CC lib/util/cpuset.o 00:25:31.351 CC lib/util/crc16.o 00:25:31.351 CC lib/util/crc32.o 00:25:31.351 CC lib/dma/dma.o 00:25:31.351 CC lib/util/crc32c.o 00:25:31.351 CC lib/util/crc32_ieee.o 00:25:31.351 CC lib/util/crc64.o 00:25:31.351 CXX lib/trace_parser/trace.o 00:25:31.351 CC lib/util/dif.o 00:25:31.351 CC lib/util/fd_group.o 00:25:31.351 CC lib/util/fd.o 00:25:31.351 CC lib/util/file.o 00:25:31.351 CC lib/util/hexlify.o 00:25:31.351 CC lib/util/iov.o 00:25:31.351 CC lib/util/math.o 00:25:31.351 CC lib/util/net.o 00:25:31.351 CC lib/util/pipe.o 00:25:31.351 CC lib/util/strerror_tls.o 00:25:31.351 CC lib/util/string.o 00:25:31.351 CC lib/util/uuid.o 00:25:31.351 CC lib/util/xor.o 00:25:31.351 CC lib/util/zipf.o 00:25:31.351 CC lib/vfio_user/host/vfio_user_pci.o 
00:25:31.351 CC lib/vfio_user/host/vfio_user.o 00:25:31.918 LIB libspdk_dma.a 00:25:31.918 LIB libspdk_vfio_user.a 00:25:31.918 LIB libspdk_ioat.a 00:25:32.509 LIB libspdk_trace_parser.a 00:25:32.777 LIB libspdk_util.a 00:25:33.715 CC lib/env_dpdk/env.o 00:25:33.715 CC lib/env_dpdk/memory.o 00:25:33.715 CC lib/json/json_parse.o 00:25:33.715 CC lib/env_dpdk/init.o 00:25:33.715 CC lib/env_dpdk/pci.o 00:25:33.715 CC lib/env_dpdk/threads.o 00:25:33.715 CC lib/json/json_util.o 00:25:33.715 CC lib/json/json_write.o 00:25:33.715 CC lib/conf/conf.o 00:25:33.715 CC lib/env_dpdk/pci_virtio.o 00:25:33.715 CC lib/env_dpdk/pci_vmd.o 00:25:33.715 CC lib/env_dpdk/pci_ioat.o 00:25:33.715 CC lib/vmd/vmd.o 00:25:33.715 CC lib/vmd/led.o 00:25:33.715 CC lib/env_dpdk/pci_event.o 00:25:33.715 CC lib/env_dpdk/pci_idxd.o 00:25:33.715 CC lib/env_dpdk/sigbus_handler.o 00:25:33.715 CC lib/env_dpdk/pci_dpdk.o 00:25:33.715 CC lib/env_dpdk/pci_dpdk_2207.o 00:25:33.715 CC lib/env_dpdk/pci_dpdk_2211.o 00:25:34.281 LIB libspdk_conf.a 00:25:34.539 LIB libspdk_json.a 00:25:34.539 LIB libspdk_vmd.a 00:25:35.106 CC lib/jsonrpc/jsonrpc_server.o 00:25:35.106 CC lib/jsonrpc/jsonrpc_client.o 00:25:35.106 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:25:35.106 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:25:35.672 LIB libspdk_jsonrpc.a 00:25:35.672 LIB libspdk_env_dpdk.a 00:25:36.238 CC lib/rpc/rpc.o 00:25:36.497 LIB libspdk_rpc.a 00:25:37.063 CC lib/keyring/keyring.o 00:25:37.063 CC lib/keyring/keyring_rpc.o 00:25:37.063 CC lib/notify/notify.o 00:25:37.063 CC lib/notify/notify_rpc.o 00:25:37.063 CC lib/trace/trace.o 00:25:37.063 CC lib/trace/trace_flags.o 00:25:37.063 CC lib/trace/trace_rpc.o 00:25:37.320 LIB libspdk_notify.a 00:25:37.320 LIB libspdk_keyring.a 00:25:37.320 LIB libspdk_trace.a 00:25:37.884 CC lib/sock/sock.o 00:25:37.884 CC lib/sock/sock_rpc.o 00:25:37.884 CC lib/thread/iobuf.o 00:25:37.884 CC lib/thread/thread.o 00:25:38.447 LIB libspdk_sock.a 00:25:39.013 CC lib/nvme/nvme_ctrlr.o 00:25:39.013 CC 
lib/nvme/nvme_ctrlr_cmd.o 00:25:39.013 CC lib/nvme/nvme_ns.o 00:25:39.013 CC lib/nvme/nvme_ns_cmd.o 00:25:39.013 CC lib/nvme/nvme_pcie_common.o 00:25:39.013 CC lib/nvme/nvme_fabric.o 00:25:39.013 CC lib/nvme/nvme_pcie.o 00:25:39.013 CC lib/nvme/nvme_qpair.o 00:25:39.013 CC lib/nvme/nvme.o 00:25:39.013 CC lib/nvme/nvme_quirks.o 00:25:39.013 CC lib/nvme/nvme_transport.o 00:25:39.013 CC lib/nvme/nvme_discovery.o 00:25:39.013 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:25:39.013 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:25:39.013 CC lib/nvme/nvme_tcp.o 00:25:39.013 CC lib/nvme/nvme_io_msg.o 00:25:39.013 CC lib/nvme/nvme_opal.o 00:25:39.013 CC lib/nvme/nvme_poll_group.o 00:25:39.013 CC lib/nvme/nvme_zns.o 00:25:39.013 CC lib/nvme/nvme_stubs.o 00:25:39.013 CC lib/nvme/nvme_auth.o 00:25:39.013 CC lib/nvme/nvme_cuse.o 00:25:39.943 LIB libspdk_thread.a 00:25:40.882 CC lib/init/json_config.o 00:25:40.882 CC lib/blob/blobstore.o 00:25:40.882 CC lib/init/subsystem.o 00:25:40.882 CC lib/blob/request.o 00:25:40.882 CC lib/init/subsystem_rpc.o 00:25:40.882 CC lib/init/rpc.o 00:25:40.882 CC lib/blob/zeroes.o 00:25:40.882 CC lib/accel/accel.o 00:25:40.882 CC lib/blob/blob_bs_dev.o 00:25:40.882 CC lib/accel/accel_rpc.o 00:25:40.882 CC lib/accel/accel_sw.o 00:25:40.882 CC lib/virtio/virtio.o 00:25:40.882 CC lib/virtio/virtio_vhost_user.o 00:25:40.882 CC lib/virtio/virtio_vfio_user.o 00:25:40.882 CC lib/virtio/virtio_pci.o 00:25:41.839 LIB libspdk_init.a 00:25:41.839 LIB libspdk_virtio.a 00:25:42.097 CC lib/event/app.o 00:25:42.097 CC lib/event/reactor.o 00:25:42.097 CC lib/event/log_rpc.o 00:25:42.097 CC lib/event/app_rpc.o 00:25:42.097 CC lib/event/scheduler_static.o 00:25:42.661 LIB libspdk_event.a 00:25:42.919 LIB libspdk_accel.a 00:25:43.178 LIB libspdk_nvme.a 00:25:43.745 CC lib/bdev/bdev.o 00:25:43.745 CC lib/bdev/bdev_rpc.o 00:25:43.745 CC lib/bdev/bdev_zone.o 00:25:43.745 CC lib/bdev/part.o 00:25:43.745 CC lib/bdev/scsi_nvme.o 00:25:45.118 LIB libspdk_blob.a 00:25:46.053 CC 
lib/blobfs/blobfs.o 00:25:46.053 CC lib/blobfs/tree.o 00:25:46.053 CC lib/lvol/lvol.o 00:25:46.989 LIB libspdk_bdev.a 00:25:47.247 LIB libspdk_blobfs.a 00:25:47.247 LIB libspdk_lvol.a 00:25:48.183 CC lib/nvmf/ctrlr_discovery.o 00:25:48.183 CC lib/nvmf/ctrlr.o 00:25:48.183 CC lib/nvmf/ctrlr_bdev.o 00:25:48.183 CC lib/nvmf/subsystem.o 00:25:48.183 CC lib/nvmf/nvmf.o 00:25:48.183 CC lib/scsi/dev.o 00:25:48.183 CC lib/nvmf/nvmf_rpc.o 00:25:48.183 CC lib/nvmf/transport.o 00:25:48.183 CC lib/nvmf/tcp.o 00:25:48.183 CC lib/scsi/lun.o 00:25:48.183 CC lib/scsi/port.o 00:25:48.183 CC lib/scsi/scsi.o 00:25:48.183 CC lib/nvmf/stubs.o 00:25:48.183 CC lib/nvmf/mdns_server.o 00:25:48.183 CC lib/nbd/nbd.o 00:25:48.183 CC lib/scsi/scsi_bdev.o 00:25:48.183 CC lib/nvmf/auth.o 00:25:48.183 CC lib/nbd/nbd_rpc.o 00:25:48.183 CC lib/ftl/ftl_core.o 00:25:48.183 CC lib/ftl/ftl_init.o 00:25:48.183 CC lib/scsi/scsi_pr.o 00:25:48.183 CC lib/ftl/ftl_layout.o 00:25:48.183 CC lib/ftl/ftl_debug.o 00:25:48.183 CC lib/scsi/scsi_rpc.o 00:25:48.183 CC lib/ftl/ftl_io.o 00:25:48.183 CC lib/scsi/task.o 00:25:48.183 CC lib/ftl/ftl_sb.o 00:25:48.183 CC lib/ftl/ftl_l2p.o 00:25:48.183 CC lib/ftl/ftl_l2p_flat.o 00:25:48.183 CC lib/ftl/ftl_nv_cache.o 00:25:48.183 CC lib/ftl/ftl_band.o 00:25:48.183 CC lib/ftl/ftl_band_ops.o 00:25:48.183 CC lib/ftl/ftl_rq.o 00:25:48.183 CC lib/ftl/ftl_writer.o 00:25:48.183 CC lib/ftl/ftl_reloc.o 00:25:48.183 CC lib/ftl/ftl_l2p_cache.o 00:25:48.183 CC lib/ftl/ftl_p2l.o 00:25:48.183 CC lib/ftl/mngt/ftl_mngt.o 00:25:48.183 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:25:48.183 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:25:48.183 CC lib/ftl/mngt/ftl_mngt_startup.o 00:25:48.183 CC lib/ftl/mngt/ftl_mngt_md.o 00:25:48.183 CC lib/ftl/mngt/ftl_mngt_misc.o 00:25:48.183 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:25:48.183 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:25:48.183 CC lib/ftl/mngt/ftl_mngt_band.o 00:25:48.183 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:25:48.183 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:25:48.183 CC 
lib/ftl/mngt/ftl_mngt_recovery.o 00:25:48.183 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:25:48.183 CC lib/ftl/utils/ftl_conf.o 00:25:48.183 CC lib/ftl/utils/ftl_md.o 00:25:48.183 CC lib/ftl/utils/ftl_mempool.o 00:25:48.183 CC lib/ftl/utils/ftl_bitmap.o 00:25:48.183 CC lib/ftl/utils/ftl_property.o 00:25:48.183 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:25:48.183 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:25:48.183 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:25:48.442 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:25:48.442 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:25:48.442 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:25:48.442 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:25:48.442 CC lib/ftl/upgrade/ftl_sb_v3.o 00:25:48.442 CC lib/ftl/upgrade/ftl_sb_v5.o 00:25:48.442 CC lib/ftl/nvc/ftl_nvc_dev.o 00:25:48.442 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:25:48.442 CC lib/ftl/base/ftl_base_dev.o 00:25:48.442 CC lib/ftl/base/ftl_base_bdev.o 00:25:50.342 LIB libspdk_nbd.a 00:25:50.600 LIB libspdk_scsi.a 00:25:50.859 LIB libspdk_ftl.a 00:25:51.117 CC lib/iscsi/init_grp.o 00:25:51.117 CC lib/iscsi/conn.o 00:25:51.117 CC lib/iscsi/iscsi.o 00:25:51.117 CC lib/iscsi/param.o 00:25:51.117 CC lib/iscsi/md5.o 00:25:51.117 CC lib/iscsi/iscsi_subsystem.o 00:25:51.117 CC lib/iscsi/iscsi_rpc.o 00:25:51.117 CC lib/iscsi/tgt_node.o 00:25:51.117 CC lib/iscsi/portal_grp.o 00:25:51.117 CC lib/iscsi/task.o 00:25:51.117 CC lib/vhost/vhost.o 00:25:51.117 CC lib/vhost/vhost_rpc.o 00:25:51.117 CC lib/vhost/vhost_scsi.o 00:25:51.117 CC lib/vhost/vhost_blk.o 00:25:51.117 CC lib/vhost/rte_vhost_user.o 00:25:51.683 LIB libspdk_nvmf.a 00:25:53.056 LIB libspdk_vhost.a 00:25:53.056 LIB libspdk_iscsi.a 00:25:57.259 CC module/env_dpdk/env_dpdk_rpc.o 00:25:57.259 CC module/keyring/file/keyring.o 00:25:57.259 CC module/keyring/file/keyring_rpc.o 00:25:57.259 CC module/scheduler/dynamic/scheduler_dynamic.o 00:25:57.259 CC module/accel/ioat/accel_ioat.o 00:25:57.259 CC module/accel/ioat/accel_ioat_rpc.o 00:25:57.259 CC 
module/keyring/linux/keyring_rpc.o 00:25:57.259 CC module/scheduler/gscheduler/gscheduler.o 00:25:57.259 CC module/blob/bdev/blob_bdev.o 00:25:57.259 CC module/sock/posix/posix.o 00:25:57.259 CC module/keyring/linux/keyring.o 00:25:57.259 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:25:57.259 CC module/accel/error/accel_error.o 00:25:57.259 CC module/accel/error/accel_error_rpc.o 00:25:57.542 LIB libspdk_env_dpdk_rpc.a 00:25:57.542 LIB libspdk_keyring_linux.a 00:25:57.542 LIB libspdk_keyring_file.a 00:25:57.542 LIB libspdk_accel_ioat.a 00:25:57.808 LIB libspdk_scheduler_dpdk_governor.a 00:25:57.808 LIB libspdk_scheduler_gscheduler.a 00:25:57.808 LIB libspdk_scheduler_dynamic.a 00:25:57.808 LIB libspdk_accel_error.a 00:25:57.808 LIB libspdk_blob_bdev.a 00:25:58.375 LIB libspdk_sock_posix.a 00:25:58.375 CC module/bdev/nvme/bdev_nvme.o 00:25:58.375 CC module/bdev/nvme/bdev_nvme_rpc.o 00:25:58.375 CC module/bdev/nvme/nvme_rpc.o 00:25:58.375 CC module/blobfs/bdev/blobfs_bdev.o 00:25:58.375 CC module/bdev/nvme/bdev_mdns_client.o 00:25:58.375 CC module/bdev/nvme/vbdev_opal.o 00:25:58.375 CC module/bdev/nvme/vbdev_opal_rpc.o 00:25:58.375 CC module/bdev/gpt/gpt.o 00:25:58.375 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:25:58.375 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:25:58.375 CC module/bdev/null/bdev_null.o 00:25:58.375 CC module/bdev/gpt/vbdev_gpt.o 00:25:58.375 CC module/bdev/null/bdev_null_rpc.o 00:25:58.375 CC module/bdev/passthru/vbdev_passthru.o 00:25:58.375 CC module/bdev/aio/bdev_aio.o 00:25:58.375 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:25:58.375 CC module/bdev/aio/bdev_aio_rpc.o 00:25:58.375 CC module/bdev/zone_block/vbdev_zone_block.o 00:25:58.375 CC module/bdev/lvol/vbdev_lvol.o 00:25:58.375 CC module/bdev/delay/vbdev_delay.o 00:25:58.375 CC module/bdev/delay/vbdev_delay_rpc.o 00:25:58.375 CC module/bdev/malloc/bdev_malloc.o 00:25:58.375 CC module/bdev/error/vbdev_error.o 00:25:58.375 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 
00:25:58.375 CC module/bdev/malloc/bdev_malloc_rpc.o 00:25:58.375 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:25:58.375 CC module/bdev/error/vbdev_error_rpc.o 00:25:58.375 CC module/bdev/virtio/bdev_virtio_scsi.o 00:25:58.375 CC module/bdev/virtio/bdev_virtio_blk.o 00:25:58.375 CC module/bdev/raid/bdev_raid.o 00:25:58.375 CC module/bdev/virtio/bdev_virtio_rpc.o 00:25:58.375 CC module/bdev/raid/bdev_raid_rpc.o 00:25:58.375 CC module/bdev/ftl/bdev_ftl.o 00:25:58.375 CC module/bdev/raid/bdev_raid_sb.o 00:25:58.375 CC module/bdev/raid/raid0.o 00:25:58.375 CC module/bdev/split/vbdev_split.o 00:25:58.375 CC module/bdev/raid/raid1.o 00:25:58.375 CC module/bdev/ftl/bdev_ftl_rpc.o 00:25:58.375 CC module/bdev/split/vbdev_split_rpc.o 00:25:58.375 CC module/bdev/raid/concat.o 00:25:59.751 LIB libspdk_blobfs_bdev.a 00:25:59.751 LIB libspdk_bdev_ftl.a 00:25:59.751 LIB libspdk_bdev_split.a 00:25:59.751 LIB libspdk_bdev_malloc.a 00:25:59.751 LIB libspdk_bdev_passthru.a 00:25:59.751 LIB libspdk_bdev_zone_block.a 00:25:59.751 LIB libspdk_bdev_gpt.a 00:25:59.751 LIB libspdk_bdev_null.a 00:25:59.751 LIB libspdk_bdev_error.a 00:26:00.008 LIB libspdk_bdev_aio.a 00:26:00.008 LIB libspdk_bdev_delay.a 00:26:00.008 LIB libspdk_bdev_virtio.a 00:26:00.266 LIB libspdk_bdev_lvol.a 00:26:00.525 LIB libspdk_bdev_raid.a 00:26:01.898 LIB libspdk_bdev_nvme.a 00:26:03.833 CC module/event/subsystems/vmd/vmd.o 00:26:03.833 CC module/event/subsystems/vmd/vmd_rpc.o 00:26:03.833 CC module/event/subsystems/scheduler/scheduler.o 00:26:03.833 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:26:03.833 CC module/event/subsystems/iobuf/iobuf.o 00:26:03.833 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:26:03.833 CC module/event/subsystems/sock/sock.o 00:26:03.833 CC module/event/subsystems/keyring/keyring.o 00:26:03.833 LIB libspdk_event_vhost_blk.a 00:26:03.833 LIB libspdk_event_scheduler.a 00:26:03.833 LIB libspdk_event_sock.a 00:26:03.833 LIB libspdk_event_keyring.a 00:26:03.833 LIB libspdk_event_vmd.a 
00:26:03.833 LIB libspdk_event_iobuf.a 00:26:04.400 CC module/event/subsystems/accel/accel.o 00:26:04.658 LIB libspdk_event_accel.a 00:26:04.917 CC module/event/subsystems/bdev/bdev.o 00:26:05.182 LIB libspdk_event_bdev.a 00:26:05.748 CC module/event/subsystems/scsi/scsi.o 00:26:05.748 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:26:05.748 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:26:05.748 CC module/event/subsystems/nbd/nbd.o 00:26:06.007 LIB libspdk_event_scsi.a 00:26:06.007 LIB libspdk_event_nbd.a 00:26:06.007 LIB libspdk_event_nvmf.a 00:26:06.265 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:26:06.265 CC module/event/subsystems/iscsi/iscsi.o 00:26:06.525 LIB libspdk_event_vhost_scsi.a 00:26:06.525 LIB libspdk_event_iscsi.a 00:26:06.784 make[1]: Nothing to be done for 'all'. 00:26:07.056 CC app/spdk_nvme_identify/identify.o 00:26:07.056 CC app/spdk_lspci/spdk_lspci.o 00:26:07.056 CC app/spdk_nvme_perf/perf.o 00:26:07.056 CXX app/trace/trace.o 00:26:07.056 CC app/spdk_top/spdk_top.o 00:26:07.056 CC app/trace_record/trace_record.o 00:26:07.056 CC app/spdk_nvme_discover/discovery_aer.o 00:26:07.056 CC app/nvmf_tgt/nvmf_main.o 00:26:07.056 CC app/spdk_dd/spdk_dd.o 00:26:07.056 CC app/spdk_tgt/spdk_tgt.o 00:26:07.056 CC app/iscsi_tgt/iscsi_tgt.o 00:26:07.056 CC examples/interrupt_tgt/interrupt_tgt.o 00:26:07.315 CC examples/ioat/verify/verify.o 00:26:07.315 CC examples/util/zipf/zipf.o 00:26:07.315 CC examples/ioat/perf/perf.o 00:26:07.315 LINK spdk_lspci 00:26:07.574 LINK spdk_tgt 00:26:07.574 LINK spdk_trace_record 00:26:07.574 LINK spdk_nvme_discover 00:26:07.574 LINK zipf 00:26:07.574 LINK interrupt_tgt 00:26:07.574 LINK nvmf_tgt 00:26:07.574 LINK iscsi_tgt 00:26:07.574 LINK verify 00:26:07.833 LINK ioat_perf 00:26:07.833 LINK spdk_dd 00:26:08.091 LINK spdk_trace 00:26:09.028 LINK spdk_nvme_perf 00:26:09.028 LINK spdk_top 00:26:09.028 LINK spdk_nvme_identify 00:26:09.963 CC app/vhost/vhost.o 00:26:10.220 LINK vhost 00:26:12.753 CC 
examples/vmd/led/led.o 00:26:12.753 CC examples/sock/hello_world/hello_sock.o 00:26:12.753 CC examples/vmd/lsvmd/lsvmd.o 00:26:12.753 CC examples/thread/thread/thread_ex.o 00:26:13.011 LINK lsvmd 00:26:13.011 LINK led 00:26:13.275 LINK hello_sock 00:26:13.275 LINK thread 00:26:21.396 CC examples/nvme/hello_world/hello_world.o 00:26:21.396 CC examples/nvme/reconnect/reconnect.o 00:26:21.396 CC examples/nvme/abort/abort.o 00:26:21.396 CC examples/nvme/nvme_manage/nvme_manage.o 00:26:21.396 CC examples/nvme/arbitration/arbitration.o 00:26:21.396 CC examples/nvme/hotplug/hotplug.o 00:26:21.396 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:26:21.396 CC examples/nvme/cmb_copy/cmb_copy.o 00:26:21.396 LINK cmb_copy 00:26:21.396 LINK pmr_persistence 00:26:21.396 LINK hello_world 00:26:21.396 LINK hotplug 00:26:21.396 LINK abort 00:26:21.396 LINK reconnect 00:26:21.396 LINK arbitration 00:26:21.654 LINK nvme_manage 00:26:31.629 CC examples/accel/perf/accel_perf.o 00:26:31.629 CC examples/blob/cli/blobcli.o 00:26:31.629 CC examples/blob/hello_world/hello_blob.o 00:26:32.202 LINK hello_blob 00:26:34.108 LINK accel_perf 00:26:34.108 LINK blobcli 00:26:38.300 CC examples/bdev/hello_world/hello_bdev.o 00:26:38.300 CC examples/bdev/bdevperf/bdevperf.o 00:26:38.301 LINK hello_bdev 00:26:39.236 LINK bdevperf 00:26:49.252 CC examples/nvmf/nvmf/nvmf.o 00:26:49.252 LINK nvmf 00:26:55.876 make: Leaving directory '/mnt/sdadir/spdk' 00:26:55.876 05:17:55 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@101 -- # rm -rf /mnt/sdadir/spdk 00:27:42.586 05:18:35 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@102 -- # umount /mnt/sdadir 00:27:42.586 05:18:36 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@103 -- # rm -rf /mnt/sdadir 00:27:42.586 05:18:36 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@105 -- # stats=($(cat "/sys/block/$dev/stat")) 00:27:42.586 05:18:36 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@105 -- # cat /sys/block/sda/stat 00:27:42.586 
READ IO cnt: 100 merges: 0 sectors: 3336 ticks: 69 00:27:42.586 WRITE IO cnt: 621593 merges: 592650 sectors: 10178664 ticks: 601150 00:27:42.586 in flight: 0 io ticks: 246774 time in queue: 654333 00:27:42.586 05:18:36 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@107 -- # printf 'READ IO cnt: % 8u merges: % 8u sectors: % 8u ticks: % 8u\n' 100 0 3336 69 00:27:42.586 05:18:36 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@109 -- # printf 'WRITE IO cnt: % 8u merges: % 8u sectors: % 8u ticks: % 8u\n' 621593 592650 10178664 601150 00:27:42.586 05:18:36 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@111 -- # printf 'in flight: % 8u io ticks: % 8u time in queue: % 8u\n' 0 246774 654333 00:27:42.586 05:18:36 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@1 -- # cleanup 00:27:42.586 05:18:36 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_delete Nvme0n1 00:27:42.586 [2024-07-23 05:18:36.300910] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (Nvme0n1p0) received event(SPDK_BDEV_EVENT_REMOVE) 00:27:42.586 05:18:36 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@13 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_error_delete EE_Malloc0 00:27:42.586 05:18:36 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:27:42.586 05:18:36 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@15 -- # killprocess 93157 00:27:42.586 05:18:36 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@948 -- # '[' -z 93157 ']' 00:27:42.586 05:18:36 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@952 -- # kill -0 93157 00:27:42.586 05:18:36 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@953 -- # uname 00:27:42.586 05:18:36 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:42.586 05:18:36 iscsi_tgt.iscsi_tgt_ext4test -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93157 00:27:42.586 killing process with pid 93157 00:27:42.586 05:18:36 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:42.586 05:18:36 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:42.586 05:18:36 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93157' 00:27:42.586 05:18:36 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@967 -- # kill 93157 00:27:42.586 05:18:36 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@972 -- # wait 93157 00:27:42.586 05:18:37 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@17 -- # mountpoint -q /mnt/sdadir 00:27:42.586 05:18:37 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@18 -- # rm -rf /mnt/sdadir 00:27:42.586 Cleaning up iSCSI connection 00:27:42.586 05:18:37 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@20 -- # iscsicleanup 00:27:42.586 05:18:37 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:27:42.586 05:18:37 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:27:42.586 Logging out of session [sid: 72, target: iqn.2013-06.com.intel.ch.spdk:Target1, portal: 10.0.0.1,3260] 00:27:42.586 Logout of [sid: 72, target: iqn.2013-06.com.intel.ch.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:27:42.586 05:18:37 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:27:42.586 05:18:37 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@983 -- # rm -rf 00:27:42.586 05:18:37 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@21 -- # iscsitestfini 00:27:42.586 05:18:37 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:27:42.586 00:27:42.586 real 6m12.313s 00:27:42.586 user 11m22.384s 00:27:42.586 sys 2m38.176s 00:27:42.586 05:18:37 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:42.586 05:18:37 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@10 -- # set +x 00:27:42.586 ************************************ 00:27:42.586 END TEST iscsi_tgt_ext4test 00:27:42.586 ************************************ 00:27:42.586 05:18:37 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:27:42.586 05:18:37 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@49 -- # '[' 1 -eq 1 ']' 00:27:42.586 05:18:37 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@50 -- # hash ceph 00:27:42.586 05:18:37 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@54 -- # run_test iscsi_tgt_rbd /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rbd/rbd.sh 00:27:42.586 05:18:37 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:42.586 05:18:37 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:42.586 05:18:37 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:27:42.586 ************************************ 00:27:42.586 START TEST iscsi_tgt_rbd 00:27:42.586 ************************************ 00:27:42.586 05:18:37 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rbd/rbd.sh 00:27:42.586 * Looking for test storage... 
00:27:42.586 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rbd 00:27:42.586 05:18:37 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:27:42.586 05:18:37 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:27:42.586 05:18:37 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:27:42.586 05:18:37 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:27:42.586 05:18:37 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:27:42.586 05:18:37 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:27:42.586 05:18:37 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:27:42.586 05:18:37 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:27:42.586 05:18:37 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:27:42.586 05:18:37 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:27:42.586 05:18:37 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:27:42.586 05:18:37 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:27:42.586 05:18:37 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:27:42.586 05:18:37 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:27:42.586 05:18:37 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:27:42.586 05:18:37 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:27:42.586 05:18:37 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:27:42.586 05:18:37 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:27:42.586 05:18:37 iscsi_tgt.iscsi_tgt_rbd -- 
iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:27:42.586 05:18:37 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:27:42.586 05:18:37 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@11 -- # iscsitestinit 00:27:42.586 05:18:37 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:27:42.586 05:18:37 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@13 -- # timing_enter rbd_setup 00:27:42.587 05:18:37 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:42.587 05:18:37 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:27:42.587 05:18:37 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@14 -- # rbd_setup 10.0.0.1 spdk_iscsi_ns 00:27:42.587 05:18:37 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1005 -- # '[' -z 10.0.0.1 ']' 00:27:42.587 05:18:37 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1009 -- # '[' -n spdk_iscsi_ns ']' 00:27:42.587 05:18:37 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1010 -- # ip netns list 00:27:42.587 05:18:37 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1010 -- # grep spdk_iscsi_ns 00:27:42.587 spdk_iscsi_ns (id: 0) 00:27:42.587 05:18:37 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1011 -- # NS_CMD='ip netns exec spdk_iscsi_ns' 00:27:42.587 05:18:37 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1018 -- # hash ceph 00:27:42.587 05:18:37 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1019 -- # export PG_NUM=128 00:27:42.587 05:18:37 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1019 -- # PG_NUM=128 00:27:42.587 05:18:37 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1020 -- # export RBD_POOL=rbd 00:27:42.587 05:18:37 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1020 -- # RBD_POOL=rbd 00:27:42.587 05:18:37 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1021 -- # export RBD_NAME=foo 00:27:42.587 05:18:37 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1021 
-- # RBD_NAME=foo 00:27:42.587 05:18:37 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1022 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:27:42.587 + base_dir=/var/tmp/ceph 00:27:42.587 + image=/var/tmp/ceph/ceph_raw.img 00:27:42.587 + dev=/dev/loop200 00:27:42.587 + pkill -9 ceph 00:27:42.587 + sleep 3 00:27:42.587 + umount /dev/loop200p2 00:27:42.587 umount: /dev/loop200p2: no mount point specified. 00:27:42.587 + losetup -d /dev/loop200 00:27:42.587 losetup: /dev/loop200: failed to use device: No such device 00:27:42.587 + rm -rf /var/tmp/ceph 00:27:42.587 05:18:40 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1023 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/scripts/ceph/start.sh 10.0.0.1 00:27:42.587 + set -e 00:27:42.587 +++ dirname /home/vagrant/spdk_repo/spdk/scripts/ceph/start.sh 00:27:42.587 ++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/ceph 00:27:42.587 + script_dir=/home/vagrant/spdk_repo/spdk/scripts/ceph 00:27:42.587 + base_dir=/var/tmp/ceph 00:27:42.587 + mon_ip=10.0.0.1 00:27:42.587 + mon_dir=/var/tmp/ceph/mon.a 00:27:42.587 + pid_dir=/var/tmp/ceph/pid 00:27:42.587 + ceph_conf=/var/tmp/ceph/ceph.conf 00:27:42.587 + mnt_dir=/var/tmp/ceph/mnt 00:27:42.587 + image=/var/tmp/ceph_raw.img 00:27:42.587 + dev=/dev/loop200 00:27:42.587 + modprobe loop 00:27:42.587 + umount /dev/loop200p2 00:27:42.587 umount: /dev/loop200p2: no mount point specified. 00:27:42.587 + true 00:27:42.587 + losetup -d /dev/loop200 00:27:42.587 losetup: /dev/loop200: failed to use device: No such device 00:27:42.587 + true 00:27:42.587 + '[' -d /var/tmp/ceph ']' 00:27:42.587 + mkdir /var/tmp/ceph 00:27:42.587 + cp /home/vagrant/spdk_repo/spdk/scripts/ceph/ceph.conf /var/tmp/ceph/ceph.conf 00:27:42.587 + '[' '!' 
-e /var/tmp/ceph_raw.img ']' 00:27:42.587 + fallocate -l 4G /var/tmp/ceph_raw.img 00:27:42.587 + mknod /dev/loop200 b 7 200 00:27:42.587 + losetup /dev/loop200 /var/tmp/ceph_raw.img 00:27:42.587 + PARTED='parted -s' 00:27:42.587 + SGDISK=sgdisk 00:27:42.587 Partitioning /dev/loop200 00:27:42.587 + echo 'Partitioning /dev/loop200' 00:27:42.587 + parted -s /dev/loop200 mktable gpt 00:27:42.587 + sleep 2 00:27:42.587 + parted -s /dev/loop200 mkpart primary 0% 2GiB 00:27:42.845 + parted -s /dev/loop200 mkpart primary 2GiB 100% 00:27:42.845 + partno=0 00:27:42.845 + echo 'Setting name on /dev/loop200' 00:27:42.845 Setting name on /dev/loop200 00:27:42.845 + sgdisk -c 1:osd-device-0-journal /dev/loop200 00:27:43.780 Warning: The kernel is still using the old partition table. 00:27:43.780 The new table will be used at the next reboot or after you 00:27:43.780 run partprobe(8) or kpartx(8) 00:27:43.780 The operation has completed successfully. 00:27:43.780 + sgdisk -c 2:osd-device-0-data /dev/loop200 00:27:44.717 Warning: The kernel is still using the old partition table. 00:27:44.717 The new table will be used at the next reboot or after you 00:27:44.717 run partprobe(8) or kpartx(8) 00:27:44.717 The operation has completed successfully. 
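The kpartx listing that follows can be checked against the `parted mkpart` calls above. A minimal sketch (Python, assuming 512-byte sectors, parted's default 1 MiB alignment, and the 4G image created by `fallocate`; these assumptions are not stated in the log itself):

```python
# GPT layout created on /dev/loop200 by:
#   parted -s /dev/loop200 mkpart primary 0% 2GiB    -> loop200p1 (journal)
#   parted -s /dev/loop200 mkpart primary 2GiB 100%  -> loop200p2 (data)
SECTOR = 512
image_sectors = 4 * 1024**3 // SECTOR            # 4G raw image = 8388608 sectors

p1_start = 2048                                  # parted aligns the first partition at 1 MiB
p1_size = 2 * 1024**3 // SECTOR - p1_start       # ends exactly at the 2 GiB boundary
p2_start = 2 * 1024**3 // SECTOR                 # 4194304
p2_size = image_sectors - p2_start - 2048        # last 2048 sectors reserved for the backup GPT

print(p1_start, p1_size)   # matches "loop200p1 : 0 4192256 /dev/loop200 2048"
print(p2_start, p2_size)   # matches "loop200p2 : 0 4192256 /dev/loop200 4194304"
```

Both partitions come out at 4192256 sectors, which is exactly what kpartx reports below.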
00:27:44.717 + kpartx /dev/loop200 00:27:44.717 loop200p1 : 0 4192256 /dev/loop200 2048 00:27:44.717 loop200p2 : 0 4192256 /dev/loop200 4194304 00:27:44.717 ++ ceph -v 00:27:44.717 ++ awk '{print $3}' 00:27:44.975 + ceph_version=17.2.7 00:27:44.975 + ceph_maj=17 00:27:44.975 + '[' 17 -gt 12 ']' 00:27:44.975 + update_config=true 00:27:44.975 + rm -f /var/log/ceph/ceph-mon.a.log 00:27:44.975 + set_min_mon_release='--set-min-mon-release 14' 00:27:44.975 + ceph_osd_extra_config='--check-needs-journal --no-mon-config' 00:27:44.975 + mnt_pt=/var/tmp/ceph/mnt/osd-device-0-data 00:27:44.975 + mkdir -p /var/tmp/ceph/mnt/osd-device-0-data 00:27:44.975 + mkfs.xfs -f /dev/disk/by-partlabel/osd-device-0-data 00:27:44.975 meta-data=/dev/disk/by-partlabel/osd-device-0-data isize=512 agcount=4, agsize=131008 blks 00:27:44.975 = sectsz=512 attr=2, projid32bit=1 00:27:44.975 = crc=1 finobt=1, sparse=1, rmapbt=0 00:27:44.975 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:27:44.975 data = bsize=4096 blocks=524032, imaxpct=25 00:27:44.975 = sunit=0 swidth=0 blks 00:27:44.975 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:27:44.975 log =internal log bsize=4096 blocks=16384, version=2 00:27:44.975 = sectsz=512 sunit=0 blks, lazy-count=1 00:27:44.975 realtime =none extsz=4096 blocks=0, rtextents=0 00:27:44.975 Discarding blocks...Done. 00:27:44.975 + mount /dev/disk/by-partlabel/osd-device-0-data /var/tmp/ceph/mnt/osd-device-0-data 00:27:44.975 + cat 00:27:44.975 + rm -rf '/var/tmp/ceph/mon.a/*' 00:27:44.975 + mkdir -p /var/tmp/ceph/mon.a 00:27:44.975 + mkdir -p /var/tmp/ceph/pid 00:27:44.975 + rm -f /etc/ceph/ceph.client.admin.keyring 00:27:44.975 + ceph-authtool --create-keyring --gen-key --name=mon. 
/var/tmp/ceph/keyring --cap mon 'allow *' 00:27:44.975 creating /var/tmp/ceph/keyring 00:27:44.975 + ceph-authtool --gen-key --name=client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *' /var/tmp/ceph/keyring 00:27:45.234 + monmaptool --create --clobber --add a 10.0.0.1:12046 --print /var/tmp/ceph/monmap --set-min-mon-release 14 00:27:45.234 monmaptool: monmap file /var/tmp/ceph/monmap 00:27:45.234 monmaptool: generated fsid f92e69c0-84eb-4eff-9413-f86844432e2e 00:27:45.234 setting min_mon_release = octopus 00:27:45.234 epoch 0 00:27:45.234 fsid f92e69c0-84eb-4eff-9413-f86844432e2e 00:27:45.234 last_changed 2024-07-23T05:18:45.230711+0000 00:27:45.234 created 2024-07-23T05:18:45.230711+0000 00:27:45.234 min_mon_release 15 (octopus) 00:27:45.234 election_strategy: 1 00:27:45.234 0: v2:10.0.0.1:12046/0 mon.a 00:27:45.234 monmaptool: writing epoch 0 to /var/tmp/ceph/monmap (1 monitors) 00:27:45.234 + sh -c 'ulimit -c unlimited && exec ceph-mon --mkfs -c /var/tmp/ceph/ceph.conf -i a --monmap=/var/tmp/ceph/monmap --keyring=/var/tmp/ceph/keyring --mon-data=/var/tmp/ceph/mon.a' 00:27:45.234 + '[' true = true ']' 00:27:45.234 + sed -i 's/mon addr = /mon addr = v2:/g' /var/tmp/ceph/ceph.conf 00:27:45.234 + cp /var/tmp/ceph/keyring /var/tmp/ceph/mon.a/keyring 00:27:45.234 + cp /var/tmp/ceph/ceph.conf /etc/ceph/ceph.conf 00:27:45.234 + cp /var/tmp/ceph/keyring /etc/ceph/keyring 00:27:45.234 + cp /var/tmp/ceph/keyring /etc/ceph/ceph.client.admin.keyring 00:27:45.234 + chmod a+r /etc/ceph/ceph.client.admin.keyring 00:27:45.234 ++ hostname 00:27:45.234 + ceph-run sh -c 'ulimit -n 16384 && ulimit -c unlimited && exec ceph-mon -c /var/tmp/ceph/ceph.conf -i a --keyring=/var/tmp/ceph/keyring --pid-file=/var/tmp/ceph/pid/root@fedora38-cloud-1716830599-074-updated-1705279005.pid --mon-data=/var/tmp/ceph/mon.a' 00:27:45.492 + true 00:27:45.492 + '[' true = true ']' 00:27:45.492 + ceph-conf --name mon.a --show-config-value log_file 00:27:45.492 
/var/log/ceph/ceph-mon.a.log 00:27:45.492 ++ ceph -s 00:27:45.492 ++ grep id 00:27:45.492 ++ awk '{print $2}' 00:27:45.751 + fsid=f92e69c0-84eb-4eff-9413-f86844432e2e 00:27:45.751 + sed -i 's/perf = true/perf = true\n\tfsid = f92e69c0-84eb-4eff-9413-f86844432e2e \n/g' /var/tmp/ceph/ceph.conf 00:27:45.751 + (( ceph_maj < 18 )) 00:27:45.751 + sed -i 's/perf = true/perf = true\n\tosd objectstore = filestore\n/g' /var/tmp/ceph/ceph.conf 00:27:45.751 + cat /var/tmp/ceph/ceph.conf 00:27:45.751 [global] 00:27:45.751 debug_lockdep = 0/0 00:27:45.751 debug_context = 0/0 00:27:45.751 debug_crush = 0/0 00:27:45.751 debug_buffer = 0/0 00:27:45.751 debug_timer = 0/0 00:27:45.751 debug_filer = 0/0 00:27:45.751 debug_objecter = 0/0 00:27:45.751 debug_rados = 0/0 00:27:45.751 debug_rbd = 0/0 00:27:45.751 debug_ms = 0/0 00:27:45.751 debug_monc = 0/0 00:27:45.751 debug_tp = 0/0 00:27:45.751 debug_auth = 0/0 00:27:45.751 debug_finisher = 0/0 00:27:45.751 debug_heartbeatmap = 0/0 00:27:45.751 debug_perfcounter = 0/0 00:27:45.751 debug_asok = 0/0 00:27:45.751 debug_throttle = 0/0 00:27:45.751 debug_mon = 0/0 00:27:45.751 debug_paxos = 0/0 00:27:45.751 debug_rgw = 0/0 00:27:45.751 00:27:45.751 perf = true 00:27:45.751 osd objectstore = filestore 00:27:45.751 00:27:45.751 fsid = f92e69c0-84eb-4eff-9413-f86844432e2e 00:27:45.751 00:27:45.751 mutex_perf_counter = false 00:27:45.751 throttler_perf_counter = false 00:27:45.751 rbd cache = false 00:27:45.751 mon_allow_pool_delete = true 00:27:45.751 00:27:45.751 osd_pool_default_size = 1 00:27:45.751 00:27:45.751 [mon] 00:27:45.751 mon_max_pool_pg_num=166496 00:27:45.751 mon_osd_max_split_count = 10000 00:27:45.751 mon_pg_warn_max_per_osd = 10000 00:27:45.751 00:27:45.751 [osd] 00:27:45.751 osd_op_threads = 64 00:27:45.751 filestore_queue_max_ops=5000 00:27:45.751 filestore_queue_committing_max_ops=5000 00:27:45.751 journal_max_write_entries=1000 00:27:45.751 journal_queue_max_ops=3000 00:27:45.751 objecter_inflight_ops=102400 00:27:45.751 
filestore_wbthrottle_enable=false 00:27:45.751 filestore_queue_max_bytes=1048576000 00:27:45.751 filestore_queue_committing_max_bytes=1048576000 00:27:45.751 journal_max_write_bytes=1048576000 00:27:45.751 journal_queue_max_bytes=1048576000 00:27:45.751 ms_dispatch_throttle_bytes=1048576000 00:27:45.751 objecter_inflight_op_bytes=1048576000 00:27:45.751 filestore_max_sync_interval=10 00:27:45.751 osd_client_message_size_cap = 0 00:27:45.751 osd_client_message_cap = 0 00:27:45.751 osd_enable_op_tracker = false 00:27:45.751 filestore_fd_cache_size = 10240 00:27:45.751 filestore_fd_cache_shards = 64 00:27:45.751 filestore_op_threads = 16 00:27:45.751 osd_op_num_shards = 48 00:27:45.751 osd_op_num_threads_per_shard = 2 00:27:45.751 osd_pg_object_context_cache_count = 10240 00:27:45.751 filestore_odsync_write = True 00:27:45.751 journal_dynamic_throttle = True 00:27:45.751 00:27:45.751 [osd.0] 00:27:45.751 osd data = /var/tmp/ceph/mnt/osd-device-0-data 00:27:45.751 osd journal = /dev/disk/by-partlabel/osd-device-0-journal 00:27:45.751 00:27:45.751 # add mon address 00:27:45.751 [mon.a] 00:27:45.751 mon addr = v2:10.0.0.1:12046 00:27:45.751 + i=0 00:27:45.751 + mkdir -p /var/tmp/ceph/mnt 00:27:45.751 ++ uuidgen 00:27:45.751 + uuid=379c4d34-d972-4a72-a93d-d43e13604ab0 00:27:45.751 + ceph -c /var/tmp/ceph/ceph.conf osd create 379c4d34-d972-4a72-a93d-d43e13604ab0 0 00:27:46.010 0 00:27:46.010 + ceph-osd -c /var/tmp/ceph/ceph.conf -i 0 --mkfs --mkkey --osd-uuid 379c4d34-d972-4a72-a93d-d43e13604ab0 --check-needs-journal --no-mon-config 00:27:46.010 2024-07-23T05:18:46.189+0000 7f3f74c3b400 -1 auth: error reading file: /var/tmp/ceph/mnt/osd-device-0-data/keyring: can't open /var/tmp/ceph/mnt/osd-device-0-data/keyring: (2) No such file or directory 00:27:46.010 2024-07-23T05:18:46.190+0000 7f3f74c3b400 -1 created new key in keyring /var/tmp/ceph/mnt/osd-device-0-data/keyring 00:27:46.268 2024-07-23T05:18:46.246+0000 7f3f74c3b400 -1 journal check: ondisk fsid 
00000000-0000-0000-0000-000000000000 doesn't match expected 379c4d34-d972-4a72-a93d-d43e13604ab0, invalid (someone else's?) journal 00:27:46.268 2024-07-23T05:18:46.289+0000 7f3f74c3b400 -1 journal do_read_entry(4096): bad header magic 00:27:46.268 2024-07-23T05:18:46.289+0000 7f3f74c3b400 -1 journal do_read_entry(4096): bad header magic 00:27:46.268 ++ hostname 00:27:46.268 + ceph -c /var/tmp/ceph/ceph.conf osd crush add osd.0 1.0 host=fedora38-cloud-1716830599-074-updated-1705279005 root=default 00:27:47.646 add item id 0 name 'osd.0' weight 1 at location {host=fedora38-cloud-1716830599-074-updated-1705279005,root=default} to crush map 00:27:47.646 + ceph -c /var/tmp/ceph/ceph.conf -i /var/tmp/ceph/mnt/osd-device-0-data/keyring auth add osd.0 osd 'allow *' mon 'allow profile osd' mgr 'allow *' 00:27:47.646 added key for osd.0 00:27:47.905 ++ ceph -c /var/tmp/ceph/ceph.conf config get osd osd_class_dir 00:27:48.164 + class_dir=/lib64/rados-classes 00:27:48.164 + [[ -e /lib64/rados-classes ]] 00:27:48.164 + ceph -c /var/tmp/ceph/ceph.conf config set osd osd_class_dir /lib64/rados-classes 00:27:48.422 + pkill -9 ceph-osd 00:27:48.422 + true 00:27:48.422 + sleep 2 00:27:50.334 + mkdir -p /var/tmp/ceph/pid 00:27:50.334 + env -i TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 ceph-osd -c /var/tmp/ceph/ceph.conf -i 0 --pid-file=/var/tmp/ceph/pid/ceph-osd.0.pid 00:27:50.334 2024-07-23T05:18:50.536+0000 7fcd09656400 -1 Falling back to public interface 00:27:50.592 2024-07-23T05:18:50.586+0000 7fcd09656400 -1 journal do_read_entry(8192): bad header magic 00:27:50.592 2024-07-23T05:18:50.586+0000 7fcd09656400 -1 journal do_read_entry(8192): bad header magic 00:27:50.592 2024-07-23T05:18:50.615+0000 7fcd09656400 -1 osd.0 0 log_to_monitors true 00:27:51.549 05:18:51 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1025 -- # ip netns exec spdk_iscsi_ns ceph osd pool create rbd 128 00:27:52.485 pool 'rbd' created 00:27:52.485 05:18:52 iscsi_tgt.iscsi_tgt_rbd -- 
common/autotest_common.sh@1026 -- # ip netns exec spdk_iscsi_ns rbd create foo --size 1000 00:27:57.854 05:18:57 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@15 -- # trap 'rbd_cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:57.854 05:18:57 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@16 -- # timing_exit rbd_setup 00:27:57.854 05:18:57 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:57.854 05:18:57 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:27:57.854 05:18:57 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@18 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:27:57.854 05:18:57 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@20 -- # timing_enter start_iscsi_tgt 00:27:57.854 05:18:57 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:57.854 05:18:57 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:27:57.854 05:18:57 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@23 -- # pid=133010 00:27:57.854 05:18:57 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@22 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --wait-for-rpc 00:27:57.854 05:18:57 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@25 -- # trap 'killprocess $pid; rbd_cleanup; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:27:57.854 05:18:57 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@27 -- # waitforlisten 133010 00:27:57.854 05:18:57 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@829 -- # '[' -z 133010 ']' 00:27:57.854 05:18:57 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:57.854 05:18:57 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:57.854 05:18:57 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:57.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
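The target is launched above with `-m 0xF`, an SPDK/DPDK core mask; the "Total cores available: 4" and per-core reactor notices that follow fall out of that mask. A small sketch of the decoding (the helper name is illustrative, not from SPDK):

```python
# Decode a hex core mask like the -m 0xF passed to iscsi_tgt: each set bit
# selects one CPU core, and SPDK starts one reactor thread per selected core.
def cores_from_mask(mask: int) -> list[int]:
    return [bit for bit in range(mask.bit_length()) if (mask >> bit) & 1]

print(cores_from_mask(0xF))  # [0, 1, 2, 3] -- four reactors, as logged
```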
00:27:57.854 05:18:57 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:57.854 05:18:57 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:27:57.854 [2024-07-23 05:18:57.799558] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:27:57.854 [2024-07-23 05:18:57.799657] [ DPDK EAL parameters: iscsi --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133010 ] 00:27:57.854 [2024-07-23 05:18:57.940985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:57.854 [2024-07-23 05:18:58.042697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:57.854 [2024-07-23 05:18:58.042840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:57.854 [2024-07-23 05:18:58.042969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:57.854 [2024-07-23 05:18:58.043090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:58.789 05:18:58 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:58.789 05:18:58 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@862 -- # return 0 00:27:58.789 05:18:58 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@28 -- # rpc_cmd iscsi_set_options -o 30 -a 16 00:27:58.789 05:18:58 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.789 05:18:58 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:27:58.789 05:18:58 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.789 05:18:58 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@29 -- # rpc_cmd framework_start_init 00:27:58.789 05:18:58 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.789 05:18:58 iscsi_tgt.iscsi_tgt_rbd -- 
common/autotest_common.sh@10 -- # set +x 00:27:59.047 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.047 iscsi_tgt is listening. Running tests... 00:27:59.047 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@30 -- # echo 'iscsi_tgt is listening. Running tests...' 00:27:59.047 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@32 -- # timing_exit start_iscsi_tgt 00:27:59.047 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:59.047 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:27:59.047 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@34 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:27:59.047 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.047 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:27:59.047 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.047 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@35 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:27:59.047 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.047 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:27:59.047 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.047 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@36 -- # rpc_cmd bdev_rbd_register_cluster iscsi_rbd_cluster --key-file /etc/ceph/ceph.client.admin.keyring --config-file /etc/ceph/ceph.conf 00:27:59.047 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.047 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:27:59.047 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.047 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@36 -- # rbd_cluster_name=iscsi_rbd_cluster 
00:27:59.047 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@37 -- # rpc_cmd bdev_rbd_get_clusters_info -b iscsi_rbd_cluster 00:27:59.047 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.047 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:27:59.047 { 00:27:59.047 "cluster_name": "iscsi_rbd_cluster", 00:27:59.047 "config_file": "/etc/ceph/ceph.conf", 00:27:59.047 "key_file": "/etc/ceph/ceph.client.admin.keyring" 00:27:59.047 } 00:27:59.047 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.047 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@38 -- # rpc_cmd bdev_rbd_create rbd foo 4096 -c iscsi_rbd_cluster 00:27:59.047 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.047 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:27:59.047 [2024-07-23 05:18:59.140944] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:27:59.047 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.047 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@38 -- # rbd_bdev=Ceph0 00:27:59.047 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@39 -- # rpc_cmd bdev_get_bdevs 00:27:59.048 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.048 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:27:59.048 [ 00:27:59.048 { 00:27:59.048 "name": "Ceph0", 00:27:59.048 "aliases": [ 00:27:59.048 "409d517e-d0ee-459f-844a-13775b980d96" 00:27:59.048 ], 00:27:59.048 "product_name": "Ceph Rbd Disk", 00:27:59.048 "block_size": 4096, 00:27:59.048 "num_blocks": 256000, 00:27:59.048 "uuid": "409d517e-d0ee-459f-844a-13775b980d96", 00:27:59.048 "assigned_rate_limits": { 00:27:59.048 "rw_ios_per_sec": 0, 00:27:59.048 "rw_mbytes_per_sec": 0, 00:27:59.048 "r_mbytes_per_sec": 0, 00:27:59.048 "w_mbytes_per_sec": 0 
00:27:59.048 }, 00:27:59.048 "claimed": false, 00:27:59.048 "zoned": false, 00:27:59.048 "supported_io_types": { 00:27:59.048 "read": true, 00:27:59.048 "write": true, 00:27:59.048 "unmap": true, 00:27:59.048 "flush": true, 00:27:59.048 "reset": true, 00:27:59.048 "nvme_admin": false, 00:27:59.048 "nvme_io": false, 00:27:59.048 "nvme_io_md": false, 00:27:59.048 "write_zeroes": true, 00:27:59.048 "zcopy": false, 00:27:59.048 "get_zone_info": false, 00:27:59.048 "zone_management": false, 00:27:59.048 "zone_append": false, 00:27:59.048 "compare": false, 00:27:59.048 "compare_and_write": true, 00:27:59.048 "abort": false, 00:27:59.048 "seek_hole": false, 00:27:59.048 "seek_data": false, 00:27:59.048 "copy": false, 00:27:59.048 "nvme_iov_md": false 00:27:59.048 }, 00:27:59.048 "driver_specific": { 00:27:59.048 "rbd": { 00:27:59.048 "pool_name": "rbd", 00:27:59.048 "rbd_name": "foo", 00:27:59.048 "config_file": "/etc/ceph/ceph.conf", 00:27:59.048 "key_file": "/etc/ceph/ceph.client.admin.keyring" 00:27:59.048 } 00:27:59.048 } 00:27:59.048 } 00:27:59.048 ] 00:27:59.048 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.048 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@41 -- # rpc_cmd bdev_rbd_resize Ceph0 2000 00:27:59.048 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.048 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:27:59.048 true 00:27:59.048 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.048 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@42 -- # rpc_cmd bdev_get_bdevs 00:27:59.048 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.048 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:27:59.048 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@42 -- # grep num_blocks 00:27:59.048 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@42 -- 
# sed 's/[^[:digit:]]//g' 00:27:59.048 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.048 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@42 -- # num_block=512000 00:27:59.048 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@44 -- # total_size=2000 00:27:59.048 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@45 -- # '[' 2000 '!=' 2000 ']' 00:27:59.048 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@53 -- # rpc_cmd iscsi_create_target_node Target3 Target3_alias Ceph0:0 1:2 64 -d 00:27:59.048 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.048 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:27:59.048 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.048 05:18:59 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@54 -- # sleep 1 00:28:00.048 05:19:00 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@56 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:28:00.048 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:28:00.048 05:19:00 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@57 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:28:00.048 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:28:00.048 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
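The size check in rbd.sh@41-45 above reduces to simple arithmetic: `bdev_rbd_resize Ceph0 2000` takes the new size in MiB, and `bdev_get_bdevs` reports it back as `num_blocks` of `block_size` bytes. A sketch with the values from this run:

```python
# Round-trip of the resize check: 2000 MiB at a 4096-byte block size.
BLOCK_SIZE = 4096
new_size_mib = 2000

num_blocks = new_size_mib * 1024 * 1024 // BLOCK_SIZE      # grepped as num_block=512000
total_size_mib = num_blocks * BLOCK_SIZE // (1024 * 1024)  # recomputed as total_size=2000

print(num_blocks, total_size_mib)
```

Since `total_size_mib` comes back as 2000, the `'[' 2000 '!=' 2000 ']'` test is false and the script proceeds.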
00:28:00.048 05:19:00 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@58 -- # waitforiscsidevices 1 00:28:00.048 05:19:00 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@116 -- # local num=1 00:28:00.048 05:19:00 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:28:00.048 05:19:00 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:28:00.048 05:19:00 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:28:00.048 05:19:00 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:28:00.306 [2024-07-23 05:19:00.267474] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:28:00.306 05:19:00 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@119 -- # n=1 00:28:00.306 05:19:00 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:28:00.306 05:19:00 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@123 -- # return 0 00:28:00.306 05:19:00 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@60 -- # trap 'iscsicleanup; killprocess $pid; rbd_cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:00.306 05:19:00 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 4096 -d 1 -t randrw -r 1 -v 00:28:00.306 [global] 00:28:00.306 thread=1 00:28:00.306 invalidate=1 00:28:00.306 rw=randrw 00:28:00.306 time_based=1 00:28:00.306 runtime=1 00:28:00.306 ioengine=libaio 00:28:00.306 direct=1 00:28:00.306 bs=4096 00:28:00.306 iodepth=1 00:28:00.306 norandommap=0 00:28:00.306 numjobs=1 00:28:00.306 00:28:00.306 verify_dump=1 00:28:00.306 verify_backlog=512 00:28:00.306 verify_state_save=0 00:28:00.306 do_verify=1 00:28:00.306 verify=crc32c-intel 00:28:00.306 [job0] 00:28:00.306 filename=/dev/sda 00:28:00.306 queue_depth set to 113 (sda) 00:28:00.306 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:00.306 fio-3.35 00:28:00.306 Starting 1 thread 00:28:00.306 
[2024-07-23 05:19:00.434651] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:28:01.681 [2024-07-23 05:19:01.553250] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:28:01.681 00:28:01.681 job0: (groupid=0, jobs=1): err= 0: pid=133124: Tue Jul 23 05:19:01 2024 00:28:01.681 read: IOPS=77, BW=310KiB/s (317kB/s)(312KiB/1008msec) 00:28:01.681 slat (usec): min=14, max=461, avg=38.24, stdev=49.82 00:28:01.681 clat (usec): min=6, max=1139, avg=305.89, stdev=202.19 00:28:01.681 lat (usec): min=148, max=1177, avg=344.13, stdev=203.57 00:28:01.681 clat percentiles (usec): 00:28:01.681 | 1.00th=[ 7], 5.00th=[ 155], 10.00th=[ 161], 20.00th=[ 180], 00:28:01.681 | 30.00th=[ 202], 40.00th=[ 227], 50.00th=[ 245], 60.00th=[ 260], 00:28:01.681 | 70.00th=[ 293], 80.00th=[ 371], 90.00th=[ 562], 95.00th=[ 807], 00:28:01.681 | 99.00th=[ 1139], 99.50th=[ 1139], 99.90th=[ 1139], 99.95th=[ 1139], 00:28:01.681 | 99.99th=[ 1139] 00:28:01.681 bw ( KiB/s): min= 263, max= 360, per=100.00%, avg=311.50, stdev=68.59, samples=2 00:28:01.681 iops : min= 65, max= 90, avg=77.50, stdev=17.68, samples=2 00:28:01.681 write: IOPS=78, BW=313KiB/s (321kB/s)(316KiB/1008msec); 0 zone resets 00:28:01.681 slat (nsec): min=19096, max=95856, avg=36779.86, stdev=12203.08 00:28:01.681 clat (usec): min=3895, max=35792, avg=12367.64, stdev=4006.67 00:28:01.681 lat (usec): min=3929, max=35815, avg=12404.42, stdev=4006.76 00:28:01.681 clat percentiles (usec): 00:28:01.681 | 1.00th=[ 3884], 5.00th=[ 4293], 10.00th=[ 8979], 20.00th=[10552], 00:28:01.681 | 30.00th=[11469], 40.00th=[11863], 50.00th=[12256], 60.00th=[12780], 00:28:01.681 | 70.00th=[13173], 80.00th=[13829], 90.00th=[15270], 95.00th=[17695], 00:28:01.681 | 99.00th=[35914], 99.50th=[35914], 99.90th=[35914], 99.95th=[35914], 00:28:01.681 | 99.99th=[35914] 00:28:01.681 bw ( KiB/s): min= 311, max= 312, per=99.21%, avg=311.50, stdev= 0.71, samples=2 00:28:01.681 iops : min= 77, max= 78, avg=77.50, 
stdev= 0.71, samples=2 00:28:01.681 lat (usec) : 10=0.64%, 250=24.84%, 500=18.47%, 750=2.55%, 1000=1.91% 00:28:01.681 lat (msec) : 2=1.27%, 4=1.27%, 10=5.73%, 20=42.04%, 50=1.27% 00:28:01.681 cpu : usr=0.30%, sys=0.50%, ctx=157, majf=0, minf=1 00:28:01.681 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:01.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.681 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.681 issued rwts: total=78,79,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.681 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:01.681 00:28:01.681 Run status group 0 (all jobs): 00:28:01.681 READ: bw=310KiB/s (317kB/s), 310KiB/s-310KiB/s (317kB/s-317kB/s), io=312KiB (319kB), run=1008-1008msec 00:28:01.681 WRITE: bw=313KiB/s (321kB/s), 313KiB/s-313KiB/s (321kB/s-321kB/s), io=316KiB (324kB), run=1008-1008msec 00:28:01.681 00:28:01.681 Disk stats (read/write): 00:28:01.681 sda: ios=115/70, merge=0/0, ticks=29/848, in_queue=878, util=90.69% 00:28:01.681 05:19:01 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 32 -t randrw -r 1 -v 00:28:01.681 [global] 00:28:01.681 thread=1 00:28:01.681 invalidate=1 00:28:01.681 rw=randrw 00:28:01.681 time_based=1 00:28:01.681 runtime=1 00:28:01.681 ioengine=libaio 00:28:01.681 direct=1 00:28:01.681 bs=131072 00:28:01.681 iodepth=32 00:28:01.681 norandommap=0 00:28:01.681 numjobs=1 00:28:01.681 00:28:01.681 verify_dump=1 00:28:01.681 verify_backlog=512 00:28:01.681 verify_state_save=0 00:28:01.681 do_verify=1 00:28:01.681 verify=crc32c-intel 00:28:01.681 [job0] 00:28:01.681 filename=/dev/sda 00:28:01.681 queue_depth set to 113 (sda) 00:28:01.681 job0: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:28:01.681 fio-3.35 00:28:01.681 Starting 1 thread 00:28:01.681 [2024-07-23 05:19:01.747549] 
scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:28:03.584 [2024-07-23 05:19:03.376383] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:28:03.584 00:28:03.584 job0: (groupid=0, jobs=1): err= 0: pid=133176: Tue Jul 23 05:19:03 2024 00:28:03.584 read: IOPS=100, BW=12.5MiB/s (13.2MB/s)(19.0MiB/1514msec) 00:28:03.584 slat (usec): min=8, max=521, avg=53.72, stdev=72.68 00:28:03.584 clat (usec): min=146, max=27946, avg=1414.67, stdev=2652.75 00:28:03.584 lat (usec): min=244, max=27981, avg=1468.39, stdev=2644.55 00:28:03.584 clat percentiles (usec): 00:28:03.584 | 1.00th=[ 215], 5.00th=[ 231], 10.00th=[ 243], 20.00th=[ 273], 00:28:03.584 | 30.00th=[ 314], 40.00th=[ 408], 50.00th=[ 498], 60.00th=[ 865], 00:28:03.584 | 70.00th=[ 1205], 80.00th=[ 1762], 90.00th=[ 5014], 95.00th=[ 5407], 00:28:03.584 | 99.00th=[ 5997], 99.50th=[27919], 99.90th=[27919], 99.95th=[27919], 00:28:03.584 | 99.99th=[27919] 00:28:03.584 bw ( KiB/s): min= 6656, max=32191, per=100.00%, avg=19423.50, stdev=18055.97, samples=2 00:28:03.584 iops : min= 52, max= 251, avg=151.50, stdev=140.71, samples=2 00:28:03.584 write: IOPS=99, BW=12.4MiB/s (13.0MB/s)(18.8MiB/1514msec); 0 zone resets 00:28:03.584 slat (usec): min=46, max=714, avg=123.33, stdev=91.11 00:28:03.584 clat (msec): min=17, max=1042, avg=317.66, stdev=280.62 00:28:03.584 lat (msec): min=17, max=1042, avg=317.79, stdev=280.62 00:28:03.584 clat percentiles (msec): 00:28:03.584 | 1.00th=[ 22], 5.00th=[ 35], 10.00th=[ 63], 20.00th=[ 107], 00:28:03.584 | 30.00th=[ 122], 40.00th=[ 126], 50.00th=[ 131], 60.00th=[ 279], 00:28:03.584 | 70.00th=[ 502], 80.00th=[ 558], 90.00th=[ 743], 95.00th=[ 894], 00:28:03.584 | 99.00th=[ 1020], 99.50th=[ 1045], 99.90th=[ 1045], 99.95th=[ 1045], 00:28:03.584 | 99.99th=[ 1045] 00:28:03.584 bw ( KiB/s): min= 256, max=22483, per=79.95%, avg=10139.67, stdev=11315.80, samples=3 00:28:03.584 iops : min= 2, max= 175, avg=79.00, stdev=88.05, samples=3 
00:28:03.584 lat (usec) : 250=7.62%, 500=17.55%, 750=3.31%, 1000=3.97% 00:28:03.585 lat (msec) : 2=9.27%, 4=3.31%, 10=4.97%, 20=0.33%, 50=3.31% 00:28:03.585 lat (msec) : 100=5.96%, 250=19.54%, 500=5.63%, 750=10.26%, 1000=3.97% 00:28:03.585 lat (msec) : 2000=0.99% 00:28:03.585 cpu : usr=0.79%, sys=0.46%, ctx=392, majf=0, minf=1 00:28:03.585 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.6%, 16=5.3%, 32=89.7%, >=64=0.0% 00:28:03.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.585 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.4%, 64=0.0%, >=64=0.0% 00:28:03.585 issued rwts: total=152,150,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:03.585 latency : target=0, window=0, percentile=100.00%, depth=32 00:28:03.585 00:28:03.585 Run status group 0 (all jobs): 00:28:03.585 READ: bw=12.5MiB/s (13.2MB/s), 12.5MiB/s-12.5MiB/s (13.2MB/s-13.2MB/s), io=19.0MiB (19.9MB), run=1514-1514msec 00:28:03.585 WRITE: bw=12.4MiB/s (13.0MB/s), 12.4MiB/s-12.4MiB/s (13.0MB/s-13.0MB/s), io=18.8MiB (19.7MB), run=1514-1514msec 00:28:03.585 00:28:03.585 Disk stats (read/write): 00:28:03.585 sda: ios=200/143, merge=0/0, ticks=210/35764, in_queue=35974, util=93.55% 00:28:03.585 05:19:03 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@65 -- # rm -f ./local-job0-0-verify.state 00:28:03.585 05:19:03 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@67 -- # trap - SIGINT SIGTERM EXIT 00:28:03.585 05:19:03 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@69 -- # iscsicleanup 00:28:03.585 05:19:03 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:28:03.585 Cleaning up iSCSI connection 00:28:03.585 05:19:03 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:28:03.585 Logging out of session [sid: 73, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:28:03.585 Logout of [sid: 73, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
00:28:03.585 05:19:03 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:28:03.585 05:19:03 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@983 -- # rm -rf 00:28:03.585 05:19:03 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@70 -- # rpc_cmd bdev_rbd_delete Ceph0 00:28:03.585 05:19:03 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.585 05:19:03 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:28:03.585 [2024-07-23 05:19:03.499416] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (Ceph0) received event(SPDK_BDEV_EVENT_REMOVE) 00:28:03.585 05:19:03 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.585 05:19:03 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@71 -- # rpc_cmd bdev_rbd_unregister_cluster iscsi_rbd_cluster 00:28:03.585 05:19:03 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.585 05:19:03 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:28:03.585 05:19:03 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.585 05:19:03 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@72 -- # killprocess 133010 00:28:03.585 05:19:03 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@948 -- # '[' -z 133010 ']' 00:28:03.585 05:19:03 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@952 -- # kill -0 133010 00:28:03.585 05:19:03 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@953 -- # uname 00:28:03.585 05:19:03 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:03.585 05:19:03 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 133010 00:28:03.585 05:19:03 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:03.585 05:19:03 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:03.585 killing process with pid 133010 
00:28:03.585 05:19:03 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 133010' 00:28:03.585 05:19:03 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@967 -- # kill 133010 00:28:03.585 05:19:03 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@972 -- # wait 133010 00:28:03.844 05:19:03 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@73 -- # rbd_cleanup 00:28:03.844 05:19:03 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1031 -- # hash ceph 00:28:03.844 05:19:03 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1032 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:28:03.844 + base_dir=/var/tmp/ceph 00:28:03.844 + image=/var/tmp/ceph/ceph_raw.img 00:28:03.844 + dev=/dev/loop200 00:28:03.844 + pkill -9 ceph 00:28:03.844 + sleep 3 00:28:07.152 + umount /dev/loop200p2 00:28:07.153 umount: /dev/loop200p2: not mounted. 00:28:07.153 + losetup -d /dev/loop200 00:28:07.153 + rm -rf /var/tmp/ceph 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1033 -- # rm -f /var/tmp/ceph_raw.img 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@75 -- # iscsitestfini 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:28:07.153 00:28:07.153 real 0m29.453s 00:28:07.153 user 0m25.762s 00:28:07.153 sys 0m1.776s 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:07.153 ************************************ 00:28:07.153 END TEST iscsi_tgt_rbd 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:28:07.153 ************************************ 00:28:07.153 05:19:07 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:28:07.153 05:19:07 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@57 -- # trap 'cleanup_veth_interfaces; exit 1' SIGINT SIGTERM EXIT 00:28:07.153 05:19:07 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@59 -- # '[' 1 -eq 1 ']' 00:28:07.153 05:19:07 iscsi_tgt -- 
iscsi_tgt/iscsi_tgt.sh@60 -- # run_test iscsi_tgt_initiator /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/initiator/initiator.sh 00:28:07.153 05:19:07 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:07.153 05:19:07 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:07.153 05:19:07 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:28:07.153 ************************************ 00:28:07.153 START TEST iscsi_tgt_initiator 00:28:07.153 ************************************ 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/initiator/initiator.sh 00:28:07.153 * Looking for test storage... 00:28:07.153 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/initiator 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:28:07.153 05:19:07 
iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@11 -- # iscsitestinit 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@16 -- # timing_enter start_iscsi_tgt 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@19 -- # pid=133310 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@18 -- # ip netns exec spdk_iscsi_ns 
/home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@20 -- # echo 'iSCSI target launched. pid: 133310' 00:28:07.153 iSCSI target launched. pid: 133310 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@21 -- # trap 'killprocess $pid;exit 1' SIGINT SIGTERM EXIT 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@22 -- # waitforlisten 133310 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@829 -- # '[' -z 133310 ']' 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:07.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:07.153 05:19:07 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:28:07.153 [2024-07-23 05:19:07.253675] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:28:07.153 [2024-07-23 05:19:07.253772] [ DPDK EAL parameters: iscsi --no-shconf -c 0x2 -m 512 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133310 ] 00:28:07.422 [2024-07-23 05:19:07.532541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.422 [2024-07-23 05:19:07.605941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:08.357 05:19:08 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:08.357 05:19:08 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@862 -- # return 0 00:28:08.357 05:19:08 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@23 -- # rpc_cmd iscsi_set_options -o 30 -a 4 00:28:08.357 05:19:08 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.357 05:19:08 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:28:08.357 05:19:08 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.357 05:19:08 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@24 -- # rpc_cmd framework_start_init 00:28:08.357 05:19:08 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.357 05:19:08 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:28:08.357 iscsi_tgt is listening. Running tests... 00:28:08.357 05:19:08 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.357 05:19:08 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@25 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:28:08.357 05:19:08 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@27 -- # timing_exit start_iscsi_tgt 00:28:08.357 05:19:08 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:08.357 05:19:08 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:28:08.357 05:19:08 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@29 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:28:08.357 05:19:08 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.357 05:19:08 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:28:08.357 05:19:08 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.357 05:19:08 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@30 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:28:08.357 05:19:08 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.357 05:19:08 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:28:08.357 05:19:08 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.357 05:19:08 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@31 -- # rpc_cmd bdev_malloc_create 64 512 00:28:08.357 05:19:08 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.357 05:19:08 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:28:08.357 Malloc0 00:28:08.357 05:19:08 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.357 05:19:08 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@36 -- # rpc_cmd iscsi_create_target_node disk1 disk1_alias Malloc0:0 1:2 256 -d 00:28:08.357 05:19:08 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.357 05:19:08 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # 
set +x 00:28:08.357 05:19:08 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.357 05:19:08 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@37 -- # sleep 1 00:28:09.292 05:19:09 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@38 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:28:09.292 05:19:09 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 5 -s 512 00:28:09.292 05:19:09 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@40 -- # initiator_json_config 00:28:09.292 05:19:09 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@139 -- # jq . 00:28:09.551 [2024-07-23 05:19:09.513388] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:28:09.551 [2024-07-23 05:19:09.513540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133350 ] 00:28:09.810 [2024-07-23 05:19:09.792932] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.810 [2024-07-23 05:19:09.866949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:09.810 Running I/O for 5 seconds... 
00:28:15.080 00:28:15.080 Latency(us) 00:28:15.080 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:15.080 Job: iSCSI0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:15.080 Verification LBA range: start 0x0 length 0x4000 00:28:15.080 iSCSI0 : 5.00 16966.05 66.27 0.00 0.00 7507.64 1035.17 8638.84 00:28:15.080 =================================================================================================================== 00:28:15.080 Total : 16966.05 66.27 0.00 0.00 7507.64 1035.17 8638.84 00:28:15.080 05:19:15 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@41 -- # initiator_json_config 00:28:15.080 05:19:15 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w unmap -t 5 -s 512 00:28:15.080 05:19:15 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@139 -- # jq . 00:28:15.080 [2024-07-23 05:19:15.213529] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:28:15.080 [2024-07-23 05:19:15.213614] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133427 ] 00:28:15.339 [2024-07-23 05:19:15.497105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.597 [2024-07-23 05:19:15.568671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:15.597 Running I/O for 5 seconds... 
00:28:20.866 00:28:20.866 Latency(us) 00:28:20.866 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:20.866 Job: iSCSI0 (Core Mask 0x1, workload: unmap, depth: 128, IO size: 4096) 00:28:20.866 iSCSI0 : 5.00 38498.91 150.39 0.00 0.00 3320.75 837.82 3738.53 00:28:20.866 =================================================================================================================== 00:28:20.866 Total : 38498.91 150.39 0.00 0.00 3320.75 837.82 3738.53 00:28:20.866 05:19:20 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w flush -t 5 -s 512 00:28:20.866 05:19:20 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@42 -- # initiator_json_config 00:28:20.866 05:19:20 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@139 -- # jq . 00:28:20.866 [2024-07-23 05:19:20.918705] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:28:20.866 [2024-07-23 05:19:20.918800] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133482 ] 00:28:21.124 [2024-07-23 05:19:21.199316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.124 [2024-07-23 05:19:21.267494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:21.124 Running I/O for 5 seconds... 
00:28:26.390 00:28:26.390 Latency(us) 00:28:26.390 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:26.390 Job: iSCSI0 (Core Mask 0x1, workload: flush, depth: 128, IO size: 4096) 00:28:26.390 iSCSI0 : 5.00 56050.58 218.95 0.00 0.00 2280.61 662.81 2576.76 00:28:26.390 =================================================================================================================== 00:28:26.390 Total : 56050.58 218.95 0.00 0.00 2280.61 662.81 2576.76 00:28:26.390 05:19:26 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w reset -t 10 -s 512 00:28:26.390 05:19:26 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@43 -- # initiator_json_config 00:28:26.390 05:19:26 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@139 -- # jq . 00:28:26.649 [2024-07-23 05:19:26.611872] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:28:26.649 [2024-07-23 05:19:26.612630] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133542 ] 00:28:26.906 [2024-07-23 05:19:26.900068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:26.906 [2024-07-23 05:19:26.970297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:26.906 Running I/O for 10 seconds... 
00:28:36.896 00:28:36.896 Latency(us) 00:28:36.896 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:36.896 Job: iSCSI0 (Core Mask 0x1, workload: reset, depth: 128, IO size: 4096) 00:28:36.896 Verification LBA range: start 0x0 length 0x4000 00:28:36.896 iSCSI0 : 10.00 17289.52 67.54 0.00 0.00 7372.01 848.99 6374.87 00:28:36.896 =================================================================================================================== 00:28:36.896 Total : 17289.52 67.54 0.00 0.00 7372.01 848.99 6374.87 00:28:37.155 05:19:37 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:28:37.155 05:19:37 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@47 -- # killprocess 133310 00:28:37.155 05:19:37 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@948 -- # '[' -z 133310 ']' 00:28:37.155 05:19:37 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@952 -- # kill -0 133310 00:28:37.155 05:19:37 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@953 -- # uname 00:28:37.155 05:19:37 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:37.155 05:19:37 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 133310 00:28:37.155 05:19:37 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:37.155 killing process with pid 133310 00:28:37.155 05:19:37 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:37.155 05:19:37 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@966 -- # echo 'killing process with pid 133310' 00:28:37.155 05:19:37 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@967 -- # kill 133310 00:28:37.155 05:19:37 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@972 -- # wait 133310 00:28:37.723 05:19:37 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@49 -- # 
iscsitestfini 00:28:37.723 05:19:37 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:28:37.723 00:28:37.723 real 0m30.604s 00:28:37.723 user 0m44.263s 00:28:37.723 sys 0m10.578s 00:28:37.723 05:19:37 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:37.723 ************************************ 00:28:37.723 END TEST iscsi_tgt_initiator 00:28:37.723 ************************************ 00:28:37.723 05:19:37 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:28:37.723 05:19:37 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:28:37.723 05:19:37 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@61 -- # run_test iscsi_tgt_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/bdev_io_wait/bdev_io_wait.sh 00:28:37.723 05:19:37 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:37.723 05:19:37 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:37.723 05:19:37 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:28:37.723 ************************************ 00:28:37.723 START TEST iscsi_tgt_bdev_io_wait 00:28:37.723 ************************************ 00:28:37.723 05:19:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/bdev_io_wait/bdev_io_wait.sh 00:28:37.723 * Looking for test storage... 
00:28:37.723 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/bdev_io_wait 00:28:37.723 05:19:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:28:37.723 05:19:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:28:37.723 05:19:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:28:37.723 05:19:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:28:37.723 05:19:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:28:37.723 05:19:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:28:37.723 05:19:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:28:37.723 05:19:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:28:37.723 05:19:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:28:37.723 05:19:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:28:37.723 05:19:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:28:37.723 05:19:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:28:37.723 05:19:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:28:37.723 05:19:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:28:37.723 05:19:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:28:37.723 05:19:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:28:37.723 05:19:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@26 
-- # INITIATOR_NAME=ANY 00:28:37.723 05:19:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:28:37.723 05:19:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:28:37.723 05:19:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:28:37.723 05:19:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@11 -- # iscsitestinit 00:28:37.723 05:19:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:28:37.723 05:19:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:37.723 05:19:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:37.724 05:19:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@16 -- # timing_enter start_iscsi_tgt 00:28:37.724 05:19:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:37.724 05:19:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:37.724 05:19:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@19 -- # pid=133703 00:28:37.724 iSCSI target launched. pid: 133703 00:28:37.724 05:19:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@20 -- # echo 'iSCSI target launched. 
pid: 133703' 00:28:37.724 05:19:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@21 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:28:37.724 05:19:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@18 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc 00:28:37.724 05:19:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@22 -- # waitforlisten 133703 00:28:37.724 05:19:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 133703 ']' 00:28:37.724 05:19:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:37.724 05:19:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:37.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:37.724 05:19:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:37.724 05:19:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:37.724 05:19:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:37.724 [2024-07-23 05:19:37.915235] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:28:37.724 [2024-07-23 05:19:37.915355] [ DPDK EAL parameters: iscsi --no-shconf -c 0x2 -m 512 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133703 ] 00:28:38.290 [2024-07-23 05:19:38.202517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:38.290 [2024-07-23 05:19:38.267820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:38.872 05:19:38 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:38.872 05:19:38 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:28:38.872 05:19:38 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@23 -- # rpc_cmd iscsi_set_options -o 30 -a 4 00:28:38.872 05:19:38 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.872 05:19:38 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:38.872 05:19:38 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.872 05:19:38 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@25 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:28:38.872 05:19:38 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.872 05:19:38 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:38.872 05:19:38 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.872 05:19:38 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@26 -- # rpc_cmd framework_start_init 00:28:38.872 05:19:38 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.872 05:19:38 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:38.872 05:19:39 
iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.872 iscsi_tgt is listening. Running tests... 00:28:38.872 05:19:39 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@27 -- # echo 'iscsi_tgt is listening. Running tests...' 00:28:38.872 05:19:39 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@29 -- # timing_exit start_iscsi_tgt 00:28:38.872 05:19:39 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:38.872 05:19:39 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:38.872 05:19:39 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@31 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:28:38.872 05:19:39 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.872 05:19:39 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:38.872 05:19:39 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.872 05:19:39 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@32 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:28:38.872 05:19:39 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.872 05:19:39 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:38.872 05:19:39 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.872 05:19:39 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@33 -- # rpc_cmd bdev_malloc_create 64 512 00:28:38.872 05:19:39 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.872 05:19:39 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:39.152 Malloc0 00:28:39.152 05:19:39 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:28:39.152 05:19:39 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@38 -- # rpc_cmd iscsi_create_target_node disk1 disk1_alias Malloc0:0 1:2 256 -d 00:28:39.152 05:19:39 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.152 05:19:39 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:39.152 05:19:39 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.152 05:19:39 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@39 -- # sleep 1 00:28:40.086 05:19:40 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@40 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:28:40.086 05:19:40 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w write -t 1 00:28:40.086 05:19:40 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@42 -- # initiator_json_config 00:28:40.086 05:19:40 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@139 -- # jq . 00:28:40.086 [2024-07-23 05:19:40.183210] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:28:40.086 [2024-07-23 05:19:40.183328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133742 ] 00:28:40.344 [2024-07-23 05:19:40.324445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.344 [2024-07-23 05:19:40.423585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:40.344 Running I/O for 1 seconds... 
00:28:41.719 00:28:41.720 Latency(us) 00:28:41.720 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:41.720 Job: iSCSI0 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:28:41.720 iSCSI0 : 1.00 28351.70 110.75 0.00 0.00 4503.22 1258.59 5421.61 00:28:41.720 =================================================================================================================== 00:28:41.720 Total : 28351.70 110.75 0.00 0.00 4503.22 1258.59 5421.61 00:28:41.720 05:19:41 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@43 -- # initiator_json_config 00:28:41.720 05:19:41 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w read -t 1 00:28:41.720 05:19:41 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@139 -- # jq . 00:28:41.720 [2024-07-23 05:19:41.814635] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:28:41.720 [2024-07-23 05:19:41.814736] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133767 ] 00:28:41.978 [2024-07-23 05:19:41.954661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.978 [2024-07-23 05:19:42.042068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:41.978 Running I/O for 1 seconds... 
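As a quick sanity check of the bdevperf summary above (not part of the test run itself), the MiB/s column is just IOPS times the I/O size, and with `-q 128` the reported IOPS agrees with Little's law applied to the average latency. The figures below are copied from the write-run table; nothing here talks to SPDK.

```python
# Sanity-check the bdevperf write-run summary line from the log above.
iops = 28351.70           # reported IOPS
io_size = 4096            # -o 4096 (bytes per I/O)
queue_depth = 128         # -q 128
avg_latency_us = 4503.22  # reported average latency in microseconds

# MiB/s = IOPS * I/O size, expressed in MiB.
mib_per_s = iops * io_size / (1024 * 1024)
print(round(mib_per_s, 2))  # 110.75, matching the MiB/s column

# Little's law: sustained IOPS ~= queue depth / average latency.
predicted_iops = queue_depth / (avg_latency_us * 1e-6)
assert abs(predicted_iops - iops) / iops < 0.02  # within ~2% of the reported value
```

The same arithmetic reconciles the read, flush, and unmap tables that follow.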
00:28:43.354 00:28:43.354 Latency(us) 00:28:43.354 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:43.354 Job: iSCSI0 (Core Mask 0x1, workload: read, depth: 128, IO size: 4096) 00:28:43.354 iSCSI0 : 1.00 33950.19 132.62 0.00 0.00 3760.67 856.44 4319.42 00:28:43.354 =================================================================================================================== 00:28:43.354 Total : 33950.19 132.62 0.00 0.00 3760.67 856.44 4319.42 00:28:43.354 05:19:43 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@44 -- # initiator_json_config 00:28:43.354 05:19:43 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w flush -t 1 00:28:43.354 05:19:43 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@139 -- # jq . 00:28:43.354 [2024-07-23 05:19:43.442122] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:28:43.354 [2024-07-23 05:19:43.442236] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133784 ] 00:28:43.613 [2024-07-23 05:19:43.582332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.613 [2024-07-23 05:19:43.657108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:43.613 Running I/O for 1 seconds... 
00:28:45.013 00:28:45.013 Latency(us) 00:28:45.013 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:45.013 Job: iSCSI0 (Core Mask 0x1, workload: flush, depth: 128, IO size: 4096) 00:28:45.013 iSCSI0 : 1.00 42894.21 167.56 0.00 0.00 2977.79 621.85 3530.01 00:28:45.013 =================================================================================================================== 00:28:45.013 Total : 42894.21 167.56 0.00 0.00 2977.79 621.85 3530.01 00:28:45.013 05:19:44 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@45 -- # initiator_json_config 00:28:45.013 05:19:44 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w unmap -t 1 00:28:45.013 05:19:44 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@139 -- # jq . 00:28:45.013 [2024-07-23 05:19:45.052033] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:28:45.013 [2024-07-23 05:19:45.052148] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133804 ] 00:28:45.013 [2024-07-23 05:19:45.185404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:45.271 [2024-07-23 05:19:45.278216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:45.271 Running I/O for 1 seconds... 
00:28:46.207 00:28:46.207 Latency(us) 00:28:46.207 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:46.207 Job: iSCSI0 (Core Mask 0x1, workload: unmap, depth: 128, IO size: 4096) 00:28:46.208 iSCSI0 : 1.00 31518.96 123.12 0.00 0.00 4051.49 808.03 5034.36 00:28:46.208 =================================================================================================================== 00:28:46.208 Total : 31518.96 123.12 0.00 0.00 4051.49 808.03 5034.36 00:28:46.466 05:19:46 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@47 -- # trap - SIGINT SIGTERM EXIT 00:28:46.466 05:19:46 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@49 -- # killprocess 133703 00:28:46.466 05:19:46 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 133703 ']' 00:28:46.466 05:19:46 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 133703 00:28:46.466 05:19:46 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:28:46.466 05:19:46 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:46.466 05:19:46 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 133703 00:28:46.466 05:19:46 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:46.466 05:19:46 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:46.466 killing process with pid 133703 00:28:46.466 05:19:46 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 133703' 00:28:46.466 05:19:46 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 133703 00:28:46.466 05:19:46 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 133703 00:28:47.033 05:19:46 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@51 -- # 
iscsitestfini 00:28:47.033 05:19:46 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:28:47.033 00:28:47.033 real 0m9.227s 00:28:47.033 user 0m12.486s 00:28:47.033 sys 0m2.838s 00:28:47.033 05:19:46 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:47.033 05:19:46 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:47.033 ************************************ 00:28:47.033 END TEST iscsi_tgt_bdev_io_wait 00:28:47.033 ************************************ 00:28:47.033 05:19:47 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:28:47.033 05:19:47 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@62 -- # run_test iscsi_tgt_resize /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/resize/resize.sh 00:28:47.033 05:19:47 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:47.033 05:19:47 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:47.033 05:19:47 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:28:47.033 ************************************ 00:28:47.033 START TEST iscsi_tgt_resize 00:28:47.033 ************************************ 00:28:47.033 05:19:47 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/resize/resize.sh 00:28:47.033 * Looking for test storage... 
00:28:47.033 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/resize 00:28:47.033 05:19:47 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:28:47.033 05:19:47 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:28:47.033 05:19:47 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:28:47.033 05:19:47 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:28:47.033 05:19:47 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:28:47.033 05:19:47 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:28:47.033 05:19:47 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:28:47.033 05:19:47 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:28:47.033 05:19:47 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:28:47.033 05:19:47 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:28:47.033 05:19:47 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:28:47.033 05:19:47 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:28:47.033 05:19:47 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:28:47.033 05:19:47 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:28:47.033 05:19:47 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:28:47.033 05:19:47 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:28:47.033 05:19:47 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:28:47.033 05:19:47 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 
00:28:47.033 05:19:47 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:28:47.033 05:19:47 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:28:47.033 05:19:47 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@12 -- # iscsitestinit 00:28:47.033 05:19:47 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:28:47.033 05:19:47 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@14 -- # BDEV_SIZE=64 00:28:47.033 05:19:47 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@15 -- # BDEV_NEW_SIZE=128 00:28:47.033 05:19:47 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@16 -- # BLOCK_SIZE=512 00:28:47.033 05:19:47 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@17 -- # RESIZE_SOCK=/var/tmp/spdk-resize.sock 00:28:47.033 05:19:47 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@19 -- # timing_enter start_iscsi_tgt 00:28:47.033 05:19:47 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:47.033 05:19:47 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:28:47.033 05:19:47 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@22 -- # rm -f /var/tmp/spdk-resize.sock 00:28:47.033 05:19:47 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@24 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc 00:28:47.033 05:19:47 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@25 -- # pid=133877 00:28:47.033 iSCSI target launched. pid: 133877 00:28:47.033 05:19:47 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@26 -- # echo 'iSCSI target launched. 
pid: 133877' 00:28:47.033 05:19:47 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@27 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:28:47.033 05:19:47 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@28 -- # waitforlisten 133877 00:28:47.033 05:19:47 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@829 -- # '[' -z 133877 ']' 00:28:47.033 05:19:47 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:47.033 05:19:47 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:47.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:47.033 05:19:47 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:47.033 05:19:47 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:47.033 05:19:47 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:28:47.033 [2024-07-23 05:19:47.173377] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:28:47.034 [2024-07-23 05:19:47.173470] [ DPDK EAL parameters: iscsi --no-shconf -c 0x2 -m 512 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133877 ] 00:28:47.292 [2024-07-23 05:19:47.447756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.551 [2024-07-23 05:19:47.513372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:48.119 05:19:48 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:48.119 05:19:48 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@862 -- # return 0 00:28:48.119 05:19:48 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@29 -- # rpc_cmd framework_start_init 00:28:48.119 05:19:48 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.119 05:19:48 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:28:48.119 05:19:48 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.119 05:19:48 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@30 -- # echo 'iscsi_tgt is listening. Running tests...' 00:28:48.119 iscsi_tgt is listening. Running tests... 
00:28:48.119 05:19:48 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@32 -- # timing_exit start_iscsi_tgt 00:28:48.119 05:19:48 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:48.119 05:19:48 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:28:48.119 05:19:48 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@34 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:28:48.119 05:19:48 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.119 05:19:48 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:28:48.119 05:19:48 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.119 05:19:48 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@35 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:28:48.119 05:19:48 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.119 05:19:48 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:28:48.119 05:19:48 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.119 05:19:48 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@36 -- # rpc_cmd bdev_null_create Null0 64 512 00:28:48.119 05:19:48 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.119 05:19:48 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:28:48.119 Null0 00:28:48.119 05:19:48 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.119 05:19:48 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@41 -- # rpc_cmd iscsi_create_target_node disk1 disk1_alias Null0:0 1:2 256 -d 00:28:48.119 05:19:48 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.119 05:19:48 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:28:48.119 05:19:48 iscsi_tgt.iscsi_tgt_resize -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.119 05:19:48 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@42 -- # sleep 1 00:28:49.493 05:19:49 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@43 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:28:49.493 05:19:49 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@47 -- # bdevperf_pid=133915 00:28:49.493 05:19:49 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@48 -- # waitforlisten 133915 /var/tmp/spdk-resize.sock 00:28:49.493 05:19:49 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@829 -- # '[' -z 133915 ']' 00:28:49.493 05:19:49 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-resize.sock 00:28:49.493 05:19:49 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:49.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-resize.sock... 00:28:49.493 05:19:49 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-resize.sock...' 00:28:49.493 05:19:49 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:49.493 05:19:49 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-resize.sock --json /dev/fd/63 -q 16 -o 4096 -w read -t 5 -R -s 128 -z 00:28:49.493 05:19:49 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@46 -- # initiator_json_config 00:28:49.493 05:19:49 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:28:49.493 05:19:49 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@139 -- # jq . 00:28:49.493 [2024-07-23 05:19:49.404258] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:28:49.493 [2024-07-23 05:19:49.404375] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 -m 128 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133915 ] 00:28:49.493 [2024-07-23 05:19:49.575167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:49.493 [2024-07-23 05:19:49.643275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:50.428 05:19:50 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:50.428 05:19:50 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@862 -- # return 0 00:28:50.428 05:19:50 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@50 -- # rpc_cmd bdev_null_resize Null0 128 00:28:50.428 05:19:50 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.428 05:19:50 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:28:50.428 [2024-07-23 05:19:50.363837] lun.c: 402:bdev_event_cb: *NOTICE*: bdev name (Null0) received event(SPDK_BDEV_EVENT_RESIZE) 00:28:50.428 true 00:28:50.428 05:19:50 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.428 05:19:50 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@52 -- # rpc_cmd -s /var/tmp/spdk-resize.sock bdev_get_bdevs 00:28:50.428 05:19:50 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.428 05:19:50 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:28:50.428 05:19:50 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@52 -- # jq '.[].num_blocks' 00:28:50.428 05:19:50 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.428 05:19:50 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@52 -- # num_block=131072 00:28:50.428 05:19:50 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@54 -- # total_size=64 
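The `num_blocks` values that resize.sh extracts with `jq '.[].num_blocks'` follow directly from the bdev geometry: `bdev_null_create Null0 64 512` makes a 64 MiB bdev with 512-byte blocks, and `bdev_null_resize Null0 128` doubles it. A quick arithmetic check of the figures reported in the log:

```python
# Derive the num_blocks values reported before and after bdev_null_resize.
block_size = 512          # bdev_null_create Null0 64 512
mib = 1024 * 1024

blocks_before = 64 * mib // block_size    # 64 MiB bdev at creation
blocks_after = 128 * mib // block_size    # after "bdev_null_resize Null0 128"

print(blocks_before)  # 131072, the first num_block value in the log
print(blocks_after)   # 262144, the value reported after the resize
```

This is why the script's size checks (`'[' 64 '!=' 64 ']'` and `'[' 128 '!=' 128 ']'`) both pass: total size in MiB is `num_blocks * block_size / 1048576`.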
00:28:50.428 05:19:50 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@55 -- # '[' 64 '!=' 64 ']' 00:28:50.428 05:19:50 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@59 -- # sleep 2 00:28:52.357 05:19:52 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@61 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-resize.sock perform_tests 00:28:52.614 Running I/O for 5 seconds... 00:28:57.888 00:28:57.888 Latency(us) 00:28:57.888 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:57.888 Job: iSCSI0 (Core Mask 0x1, workload: read, depth: 16, IO size: 4096) 00:28:57.888 iSCSI0 : 5.00 41165.14 160.80 0.00 0.00 385.57 301.61 785.69 00:28:57.888 =================================================================================================================== 00:28:57.888 Total : 41165.14 160.80 0.00 0.00 385.57 301.61 785.69 00:28:57.888 0 00:28:57.888 05:19:57 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@63 -- # rpc_cmd -s /var/tmp/spdk-resize.sock bdev_get_bdevs 00:28:57.888 05:19:57 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.888 05:19:57 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:28:57.888 05:19:57 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@63 -- # jq '.[].num_blocks' 00:28:57.889 05:19:57 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.889 05:19:57 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@63 -- # num_block=262144 00:28:57.889 05:19:57 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@65 -- # total_size=128 00:28:57.889 05:19:57 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@66 -- # '[' 128 '!=' 128 ']' 00:28:57.889 05:19:57 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:28:57.889 05:19:57 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@72 -- # killprocess 133915 00:28:57.889 05:19:57 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@948 -- # '[' -z 
133915 ']' 00:28:57.889 05:19:57 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@952 -- # kill -0 133915 00:28:57.889 05:19:57 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@953 -- # uname 00:28:57.889 05:19:57 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:57.889 05:19:57 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 133915 00:28:57.889 05:19:57 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:57.889 05:19:57 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:57.889 killing process with pid 133915 00:28:57.889 05:19:57 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@966 -- # echo 'killing process with pid 133915' 00:28:57.889 05:19:57 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@967 -- # kill 133915 00:28:57.889 Received shutdown signal, test time was about 5.000000 seconds 00:28:57.889 00:28:57.889 Latency(us) 00:28:57.889 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:57.889 =================================================================================================================== 00:28:57.889 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:57.889 05:19:57 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@972 -- # wait 133915 00:28:57.889 05:19:57 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@73 -- # killprocess 133877 00:28:57.889 05:19:57 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@948 -- # '[' -z 133877 ']' 00:28:57.889 05:19:57 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@952 -- # kill -0 133877 00:28:57.889 05:19:57 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@953 -- # uname 00:28:57.889 05:19:57 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:57.889 05:19:57 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@954 -- # 
ps --no-headers -o comm= 133877 00:28:57.889 05:19:57 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:57.889 05:19:57 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:57.889 killing process with pid 133877 00:28:57.889 05:19:57 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@966 -- # echo 'killing process with pid 133877' 00:28:57.889 05:19:57 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@967 -- # kill 133877 00:28:57.889 05:19:57 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@972 -- # wait 133877 00:28:58.147 05:19:58 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@75 -- # iscsitestfini 00:28:58.147 05:19:58 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:28:58.147 00:28:58.147 real 0m11.291s 00:28:58.147 user 0m17.141s 00:28:58.148 sys 0m3.060s 00:28:58.148 05:19:58 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:58.148 05:19:58 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:28:58.148 ************************************ 00:28:58.148 END TEST iscsi_tgt_resize 00:28:58.148 ************************************ 00:28:58.148 05:19:58 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:28:58.148 05:19:58 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@65 -- # cleanup_veth_interfaces 00:28:58.148 05:19:58 iscsi_tgt -- iscsi_tgt/common.sh@95 -- # ip link set init_br nomaster 00:28:58.406 05:19:58 iscsi_tgt -- iscsi_tgt/common.sh@96 -- # ip link set tgt_br nomaster 00:28:58.406 05:19:58 iscsi_tgt -- iscsi_tgt/common.sh@97 -- # ip link set tgt_br2 nomaster 00:28:58.406 05:19:58 iscsi_tgt -- iscsi_tgt/common.sh@98 -- # ip link set init_br down 00:28:58.406 05:19:58 iscsi_tgt -- iscsi_tgt/common.sh@99 -- # ip link set tgt_br down 00:28:58.406 05:19:58 iscsi_tgt -- iscsi_tgt/common.sh@100 -- # ip link set tgt_br2 down 00:28:58.406 05:19:58 iscsi_tgt -- 
iscsi_tgt/common.sh@101 -- # ip link delete iscsi_br type bridge 00:28:58.406 05:19:58 iscsi_tgt -- iscsi_tgt/common.sh@102 -- # ip link delete spdk_init_int 00:28:58.406 05:19:58 iscsi_tgt -- iscsi_tgt/common.sh@103 -- # ip netns exec spdk_iscsi_ns ip link delete spdk_tgt_int 00:28:58.406 05:19:58 iscsi_tgt -- iscsi_tgt/common.sh@104 -- # ip netns exec spdk_iscsi_ns ip link delete spdk_tgt_int2 00:28:58.406 05:19:58 iscsi_tgt -- iscsi_tgt/common.sh@105 -- # ip netns del spdk_iscsi_ns 00:28:58.406 05:19:58 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:28:58.406 00:28:58.406 real 21m14.332s 00:28:58.406 user 37m52.669s 00:28:58.406 sys 7m22.876s 00:28:58.406 05:19:58 iscsi_tgt -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:58.406 05:19:58 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:28:58.406 ************************************ 00:28:58.406 END TEST iscsi_tgt 00:28:58.406 ************************************ 00:28:58.406 05:19:58 -- common/autotest_common.sh@1142 -- # return 0 00:28:58.406 05:19:58 -- spdk/autotest.sh@264 -- # run_test spdkcli_iscsi /home/vagrant/spdk_repo/spdk/test/spdkcli/iscsi.sh 00:28:58.406 05:19:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:58.406 05:19:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:58.406 05:19:58 -- common/autotest_common.sh@10 -- # set +x 00:28:58.406 ************************************ 00:28:58.406 START TEST spdkcli_iscsi 00:28:58.406 ************************************ 00:28:58.406 05:19:58 spdkcli_iscsi -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/iscsi.sh 00:28:58.664 * Looking for test storage... 
00:28:58.664 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:28:58.664 05:19:58 spdkcli_iscsi -- spdkcli/iscsi.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:28:58.664 05:19:58 spdkcli_iscsi -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:28:58.664 05:19:58 spdkcli_iscsi -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:28:58.665 05:19:58 spdkcli_iscsi -- spdkcli/iscsi.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:28:58.665 05:19:58 spdkcli_iscsi -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:28:58.665 05:19:58 spdkcli_iscsi -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:28:58.665 05:19:58 spdkcli_iscsi -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:28:58.665 05:19:58 spdkcli_iscsi -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:28:58.665 05:19:58 spdkcli_iscsi -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:28:58.665 05:19:58 spdkcli_iscsi -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:28:58.665 05:19:58 spdkcli_iscsi -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:28:58.665 05:19:58 spdkcli_iscsi -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:28:58.665 05:19:58 spdkcli_iscsi -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:28:58.665 05:19:58 spdkcli_iscsi -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:28:58.665 05:19:58 spdkcli_iscsi -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:28:58.665 05:19:58 spdkcli_iscsi -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:28:58.665 05:19:58 spdkcli_iscsi -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:28:58.665 05:19:58 spdkcli_iscsi -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:28:58.665 05:19:58 spdkcli_iscsi -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 
00:28:58.665 05:19:58 spdkcli_iscsi -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:28:58.665 05:19:58 spdkcli_iscsi -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:28:58.665 05:19:58 spdkcli_iscsi -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:28:58.665 05:19:58 spdkcli_iscsi -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:28:58.665 05:19:58 spdkcli_iscsi -- spdkcli/iscsi.sh@12 -- # MATCH_FILE=spdkcli_iscsi.test 00:28:58.665 05:19:58 spdkcli_iscsi -- spdkcli/iscsi.sh@13 -- # SPDKCLI_BRANCH=/iscsi 00:28:58.665 05:19:58 spdkcli_iscsi -- spdkcli/iscsi.sh@15 -- # trap cleanup EXIT 00:28:58.665 05:19:58 spdkcli_iscsi -- spdkcli/iscsi.sh@17 -- # timing_enter run_iscsi_tgt 00:28:58.665 05:19:58 spdkcli_iscsi -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:58.665 05:19:58 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:28:58.665 05:19:58 spdkcli_iscsi -- spdkcli/iscsi.sh@21 -- # iscsi_tgt_pid=134122 00:28:58.665 05:19:58 spdkcli_iscsi -- spdkcli/iscsi.sh@22 -- # waitforlisten 134122 00:28:58.665 05:19:58 spdkcli_iscsi -- common/autotest_common.sh@829 -- # '[' -z 134122 ']' 00:28:58.665 05:19:58 spdkcli_iscsi -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:58.665 05:19:58 spdkcli_iscsi -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:58.665 05:19:58 spdkcli_iscsi -- spdkcli/iscsi.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x3 -p 0 --wait-for-rpc 00:28:58.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:58.665 05:19:58 spdkcli_iscsi -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:58.665 05:19:58 spdkcli_iscsi -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:58.665 05:19:58 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:28:58.665 [2024-07-23 05:19:58.728796] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:28:58.665 [2024-07-23 05:19:58.728891] [ DPDK EAL parameters: iscsi --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134122 ] 00:28:58.665 [2024-07-23 05:19:58.867471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:58.923 [2024-07-23 05:19:58.962701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:58.923 [2024-07-23 05:19:58.962712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.488 05:19:59 spdkcli_iscsi -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:59.488 05:19:59 spdkcli_iscsi -- common/autotest_common.sh@862 -- # return 0 00:28:59.488 05:19:59 spdkcli_iscsi -- spdkcli/iscsi.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:29:00.055 05:20:00 spdkcli_iscsi -- spdkcli/iscsi.sh@25 -- # timing_exit run_iscsi_tgt 00:29:00.055 05:20:00 spdkcli_iscsi -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:00.055 05:20:00 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:29:00.055 05:20:00 spdkcli_iscsi -- spdkcli/iscsi.sh@27 -- # timing_enter spdkcli_create_iscsi_config 00:29:00.055 05:20:00 spdkcli_iscsi -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:00.055 05:20:00 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:29:00.055 05:20:00 spdkcli_iscsi -- spdkcli/iscsi.sh@48 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc0'\'' '\''Malloc0'\'' True 00:29:00.055 '\''/bdevs/malloc create 32 512 Malloc1'\'' 
'\''Malloc1'\'' True 00:29:00.055 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:00.055 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:00.055 '\''/iscsi/portal_groups create 1 "127.0.0.1:3261 127.0.0.1:3263@0x1"'\'' '\''host=127.0.0.1, port=3261'\'' True 00:29:00.055 '\''/iscsi/portal_groups create 2 127.0.0.1:3262'\'' '\''host=127.0.0.1, port=3262'\'' True 00:29:00.055 '\''/iscsi/initiator_groups create 2 ANY 10.0.2.15/32'\'' '\''hostname=ANY, netmask=10.0.2.15/32'\'' True 00:29:00.055 '\''/iscsi/initiator_groups create 3 ANZ 10.0.2.15/32'\'' '\''hostname=ANZ, netmask=10.0.2.15/32'\'' True 00:29:00.055 '\''/iscsi/initiator_groups add_initiator 2 ANW 10.0.2.16/32'\'' '\''hostname=ANW, netmask=10.0.2.16'\'' True 00:29:00.055 '\''/iscsi/target_nodes create Target0 Target0_alias "Malloc0:0 Malloc1:1" 1:2 64 g=1'\'' '\''Target0'\'' True 00:29:00.055 '\''/iscsi/target_nodes create Target1 Target1_alias Malloc2:0 1:2 64 g=1'\'' '\''Target1'\'' True 00:29:00.055 '\''/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 iscsi_target_node_add_pg_ig_maps "1:3 2:2"'\'' '\''portal_group1 - initiator_group3'\'' True 00:29:00.055 '\''/iscsi/target_nodes add_lun iqn.2016-06.io.spdk:Target1 Malloc3 2'\'' '\''Malloc3'\'' True 00:29:00.055 '\''/iscsi/auth_groups create 1 "user:test1 secret:test1 muser:mutual_test1 msecret:mutual_test1,user:test3 secret:test3 muser:mutual_test3 msecret:mutual_test3"'\'' '\''user=test3'\'' True 00:29:00.055 '\''/iscsi/auth_groups add_secret 1 user=test2 secret=test2 muser=mutual_test2 msecret=mutual_test2'\'' '\''user=test2'\'' True 00:29:00.055 '\''/iscsi/auth_groups create 2 "user:test4 secret:test4 muser:mutual_test4 msecret:mutual_test4"'\'' '\''user=test4'\'' True 00:29:00.055 '\''/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 set_auth g=1 d=true'\'' '\''disable_chap: True'\'' True 00:29:00.055 '\''/iscsi/global_params set_auth g=1 d=true r=false'\'' '\''disable_chap: True'\'' True 00:29:00.055 
'\''/iscsi ls'\'' '\''Malloc'\'' True 00:29:00.055 ' 00:29:08.276 Executing command: ['/bdevs/malloc create 32 512 Malloc0', 'Malloc0', True] 00:29:08.276 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:08.276 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:08.276 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:08.276 Executing command: ['/iscsi/portal_groups create 1 "127.0.0.1:3261 127.0.0.1:3263@0x1"', 'host=127.0.0.1, port=3261', True] 00:29:08.276 Executing command: ['/iscsi/portal_groups create 2 127.0.0.1:3262', 'host=127.0.0.1, port=3262', True] 00:29:08.276 Executing command: ['/iscsi/initiator_groups create 2 ANY 10.0.2.15/32', 'hostname=ANY, netmask=10.0.2.15/32', True] 00:29:08.276 Executing command: ['/iscsi/initiator_groups create 3 ANZ 10.0.2.15/32', 'hostname=ANZ, netmask=10.0.2.15/32', True] 00:29:08.276 Executing command: ['/iscsi/initiator_groups add_initiator 2 ANW 10.0.2.16/32', 'hostname=ANW, netmask=10.0.2.16', True] 00:29:08.276 Executing command: ['/iscsi/target_nodes create Target0 Target0_alias "Malloc0:0 Malloc1:1" 1:2 64 g=1', 'Target0', True] 00:29:08.276 Executing command: ['/iscsi/target_nodes create Target1 Target1_alias Malloc2:0 1:2 64 g=1', 'Target1', True] 00:29:08.276 Executing command: ['/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 iscsi_target_node_add_pg_ig_maps "1:3 2:2"', 'portal_group1 - initiator_group3', True] 00:29:08.276 Executing command: ['/iscsi/target_nodes add_lun iqn.2016-06.io.spdk:Target1 Malloc3 2', 'Malloc3', True] 00:29:08.276 Executing command: ['/iscsi/auth_groups create 1 "user:test1 secret:test1 muser:mutual_test1 msecret:mutual_test1,user:test3 secret:test3 muser:mutual_test3 msecret:mutual_test3"', 'user=test3', True] 00:29:08.276 Executing command: ['/iscsi/auth_groups add_secret 1 user=test2 secret=test2 muser=mutual_test2 msecret=mutual_test2', 'user=test2', True] 00:29:08.276 Executing command: 
['/iscsi/auth_groups create 2 "user:test4 secret:test4 muser:mutual_test4 msecret:mutual_test4"', 'user=test4', True] 00:29:08.276 Executing command: ['/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 set_auth g=1 d=true', 'disable_chap: True', True] 00:29:08.276 Executing command: ['/iscsi/global_params set_auth g=1 d=true r=false', 'disable_chap: True', True] 00:29:08.276 Executing command: ['/iscsi ls', 'Malloc', True] 00:29:08.276 05:20:07 spdkcli_iscsi -- spdkcli/iscsi.sh@49 -- # timing_exit spdkcli_create_iscsi_config 00:29:08.276 05:20:07 spdkcli_iscsi -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:08.276 05:20:07 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:29:08.276 05:20:07 spdkcli_iscsi -- spdkcli/iscsi.sh@51 -- # timing_enter spdkcli_check_match 00:29:08.276 05:20:07 spdkcli_iscsi -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:08.276 05:20:07 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:29:08.276 05:20:07 spdkcli_iscsi -- spdkcli/iscsi.sh@52 -- # check_match 00:29:08.276 05:20:07 spdkcli_iscsi -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /iscsi 00:29:08.276 05:20:07 spdkcli_iscsi -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_iscsi.test.match 00:29:08.276 05:20:08 spdkcli_iscsi -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_iscsi.test 00:29:08.276 05:20:08 spdkcli_iscsi -- spdkcli/iscsi.sh@53 -- # timing_exit spdkcli_check_match 00:29:08.276 05:20:08 spdkcli_iscsi -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:08.276 05:20:08 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:29:08.276 05:20:08 spdkcli_iscsi -- spdkcli/iscsi.sh@55 -- # timing_enter spdkcli_clear_iscsi_config 00:29:08.276 05:20:08 spdkcli_iscsi -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:08.276 05:20:08 spdkcli_iscsi -- 
common/autotest_common.sh@10 -- # set +x 00:29:08.276 05:20:08 spdkcli_iscsi -- spdkcli/iscsi.sh@72 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/iscsi/auth_groups delete_secret 1 test2'\'' '\''user=test2'\'' 00:29:08.276 '\''/iscsi/auth_groups delete_secret_all 1'\'' '\''user=test1'\'' 00:29:08.276 '\''/iscsi/auth_groups delete 1'\'' '\''user=test1'\'' 00:29:08.276 '\''/iscsi/auth_groups delete_all'\'' '\''user=test4'\'' 00:29:08.276 '\''/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 iscsi_target_node_remove_pg_ig_maps "1:3 2:2"'\'' '\''portal_group1 - initiator_group3'\'' 00:29:08.276 '\''/iscsi/target_nodes delete iqn.2016-06.io.spdk:Target1'\'' '\''Target1'\'' 00:29:08.276 '\''/iscsi/target_nodes delete_all'\'' '\''Target0'\'' 00:29:08.276 '\''/iscsi/initiator_groups delete_initiator 2 ANW 10.0.2.16/32'\'' '\''ANW'\'' 00:29:08.276 '\''/iscsi/initiator_groups delete 3'\'' '\''ANZ'\'' 00:29:08.276 '\''/iscsi/initiator_groups delete_all'\'' '\''ANY'\'' 00:29:08.276 '\''/iscsi/portal_groups delete 1'\'' '\''127.0.0.1:3261'\'' 00:29:08.276 '\''/iscsi/portal_groups delete_all'\'' '\''127.0.0.1:3262'\'' 00:29:08.276 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:08.276 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:08.276 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:29:08.276 '\''/bdevs/malloc delete Malloc0'\'' '\''Malloc0'\'' 00:29:08.276 ' 00:29:14.840 Executing command: ['/iscsi/auth_groups delete_secret 1 test2', 'user=test2', False] 00:29:14.840 Executing command: ['/iscsi/auth_groups delete_secret_all 1', 'user=test1', False] 00:29:14.840 Executing command: ['/iscsi/auth_groups delete 1', 'user=test1', False] 00:29:14.840 Executing command: ['/iscsi/auth_groups delete_all', 'user=test4', False] 00:29:14.840 Executing command: ['/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 iscsi_target_node_remove_pg_ig_maps "1:3 2:2"', 'portal_group1 - initiator_group3', False] 00:29:14.840 Executing command: 
['/iscsi/target_nodes delete iqn.2016-06.io.spdk:Target1', 'Target1', False] 00:29:14.840 Executing command: ['/iscsi/target_nodes delete_all', 'Target0', False] 00:29:14.840 Executing command: ['/iscsi/initiator_groups delete_initiator 2 ANW 10.0.2.16/32', 'ANW', False] 00:29:14.840 Executing command: ['/iscsi/initiator_groups delete 3', 'ANZ', False] 00:29:14.840 Executing command: ['/iscsi/initiator_groups delete_all', 'ANY', False] 00:29:14.840 Executing command: ['/iscsi/portal_groups delete 1', '127.0.0.1:3261', False] 00:29:14.840 Executing command: ['/iscsi/portal_groups delete_all', '127.0.0.1:3262', False] 00:29:14.840 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:14.840 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:14.840 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:14.840 Executing command: ['/bdevs/malloc delete Malloc0', 'Malloc0', False] 00:29:14.840 05:20:14 spdkcli_iscsi -- spdkcli/iscsi.sh@73 -- # timing_exit spdkcli_clear_iscsi_config 00:29:14.840 05:20:14 spdkcli_iscsi -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:14.840 05:20:14 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:29:14.840 05:20:14 spdkcli_iscsi -- spdkcli/iscsi.sh@75 -- # killprocess 134122 00:29:14.840 05:20:14 spdkcli_iscsi -- common/autotest_common.sh@948 -- # '[' -z 134122 ']' 00:29:14.840 05:20:14 spdkcli_iscsi -- common/autotest_common.sh@952 -- # kill -0 134122 00:29:14.840 05:20:14 spdkcli_iscsi -- common/autotest_common.sh@953 -- # uname 00:29:14.840 05:20:14 spdkcli_iscsi -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:14.840 05:20:14 spdkcli_iscsi -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 134122 00:29:14.840 05:20:14 spdkcli_iscsi -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:14.840 05:20:14 spdkcli_iscsi -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:14.840 killing process 
with pid 134122 00:29:14.840 05:20:14 spdkcli_iscsi -- common/autotest_common.sh@966 -- # echo 'killing process with pid 134122' 00:29:14.840 05:20:14 spdkcli_iscsi -- common/autotest_common.sh@967 -- # kill 134122 00:29:14.840 05:20:14 spdkcli_iscsi -- common/autotest_common.sh@972 -- # wait 134122 00:29:14.840 05:20:14 spdkcli_iscsi -- spdkcli/iscsi.sh@1 -- # cleanup 00:29:14.840 05:20:14 spdkcli_iscsi -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:29:14.841 05:20:14 spdkcli_iscsi -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:29:14.841 05:20:14 spdkcli_iscsi -- spdkcli/common.sh@16 -- # '[' -n 134122 ']' 00:29:14.841 05:20:14 spdkcli_iscsi -- spdkcli/common.sh@17 -- # killprocess 134122 00:29:14.841 05:20:14 spdkcli_iscsi -- common/autotest_common.sh@948 -- # '[' -z 134122 ']' 00:29:14.841 05:20:14 spdkcli_iscsi -- common/autotest_common.sh@952 -- # kill -0 134122 00:29:14.841 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (134122) - No such process 00:29:14.841 Process with pid 134122 is not found 00:29:14.841 05:20:14 spdkcli_iscsi -- common/autotest_common.sh@975 -- # echo 'Process with pid 134122 is not found' 00:29:14.841 05:20:14 spdkcli_iscsi -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:29:14.841 05:20:14 spdkcli_iscsi -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_iscsi.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:29:14.841 00:29:14.841 real 0m16.303s 00:29:14.841 user 0m34.792s 00:29:14.841 sys 0m1.103s 00:29:14.841 05:20:14 spdkcli_iscsi -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:14.841 ************************************ 00:29:14.841 END TEST spdkcli_iscsi 00:29:14.841 ************************************ 00:29:14.841 05:20:14 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:29:14.841 05:20:14 -- common/autotest_common.sh@1142 -- # return 0 00:29:14.841 05:20:14 -- spdk/autotest.sh@267 -- # 
run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:29:14.841 05:20:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:14.841 05:20:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:14.841 05:20:14 -- common/autotest_common.sh@10 -- # set +x 00:29:14.841 ************************************ 00:29:14.841 START TEST spdkcli_raid 00:29:14.841 ************************************ 00:29:14.841 05:20:14 spdkcli_raid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:29:14.841 * Looking for test storage... 00:29:14.841 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:29:14.841 05:20:14 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:29:14.841 05:20:14 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:29:14.841 05:20:14 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:29:14.841 05:20:14 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:29:14.841 05:20:14 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:29:14.841 05:20:14 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:29:14.841 05:20:14 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:29:14.841 05:20:14 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:29:14.841 05:20:14 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:29:14.841 05:20:14 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:29:14.841 05:20:14 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:29:14.841 05:20:14 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:29:14.841 05:20:14 
spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:29:14.841 05:20:14 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:29:14.841 05:20:14 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:29:14.841 05:20:14 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:29:14.841 05:20:14 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:29:14.841 05:20:14 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:29:14.841 05:20:14 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:29:14.841 05:20:14 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:29:14.841 05:20:14 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:29:14.841 05:20:14 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:29:14.841 05:20:14 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:29:14.841 05:20:14 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:29:14.841 05:20:14 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:29:14.841 05:20:14 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:29:14.841 05:20:14 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:29:14.841 05:20:14 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:29:14.841 05:20:14 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:29:14.841 05:20:14 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:29:14.841 05:20:14 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:29:14.841 05:20:14 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:29:14.841 05:20:14 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:29:14.841 05:20:14 spdkcli_raid -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:14.841 05:20:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:29:14.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:14.841 05:20:14 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:29:14.841 05:20:15 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=134414 00:29:14.841 05:20:15 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 134414 00:29:14.841 05:20:15 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:29:14.841 05:20:15 spdkcli_raid -- common/autotest_common.sh@829 -- # '[' -z 134414 ']' 00:29:14.841 05:20:15 spdkcli_raid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:14.841 05:20:15 spdkcli_raid -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:14.841 05:20:15 spdkcli_raid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:14.841 05:20:15 spdkcli_raid -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:14.841 05:20:15 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:29:15.099 [2024-07-23 05:20:15.095262] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:29:15.099 [2024-07-23 05:20:15.095389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134414 ] 00:29:15.099 [2024-07-23 05:20:15.230234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:15.358 [2024-07-23 05:20:15.322007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:15.358 [2024-07-23 05:20:15.322015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:15.924 05:20:16 spdkcli_raid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:15.924 05:20:16 spdkcli_raid -- common/autotest_common.sh@862 -- # return 0 00:29:15.924 05:20:16 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:29:15.924 05:20:16 spdkcli_raid -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:15.924 05:20:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:29:15.924 05:20:16 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:29:15.924 05:20:16 spdkcli_raid -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:15.924 05:20:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:29:15.924 05:20:16 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:15.924 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:15.924 ' 00:29:17.827 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:29:17.827 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:29:17.827 05:20:17 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:29:17.827 05:20:17 spdkcli_raid -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:17.827 05:20:17 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:29:17.827 05:20:17 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:29:17.827 05:20:17 spdkcli_raid -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:17.827 05:20:17 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:29:17.827 05:20:17 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:29:17.827 ' 00:29:18.762 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:29:18.762 05:20:18 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:29:18.762 05:20:18 spdkcli_raid -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:18.762 05:20:18 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:29:18.762 05:20:18 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:29:18.762 05:20:18 spdkcli_raid -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:18.762 05:20:18 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:29:18.762 05:20:18 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:29:18.762 05:20:18 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:29:19.330 05:20:19 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:29:19.330 05:20:19 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:29:19.330 05:20:19 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:29:19.330 05:20:19 spdkcli_raid -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:19.330 05:20:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:29:19.330 05:20:19 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:29:19.330 05:20:19 spdkcli_raid -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:19.330 05:20:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:29:19.330 05:20:19 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:29:19.330 ' 00:29:20.269 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:29:20.527 05:20:20 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:29:20.527 05:20:20 spdkcli_raid -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:20.527 05:20:20 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:29:20.527 05:20:20 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:29:20.527 05:20:20 spdkcli_raid -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:20.527 05:20:20 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:29:20.527 05:20:20 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:29:20.527 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:29:20.527 ' 00:29:21.900 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:29:21.900 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:29:21.900 05:20:21 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:29:21.900 05:20:21 spdkcli_raid -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:21.900 05:20:21 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:29:21.900 05:20:21 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 134414 00:29:21.900 05:20:21 spdkcli_raid -- common/autotest_common.sh@948 -- # '[' -z 134414 ']' 00:29:21.900 05:20:22 spdkcli_raid -- common/autotest_common.sh@952 -- # kill -0 134414 00:29:21.900 05:20:22 spdkcli_raid -- 
common/autotest_common.sh@953 -- # uname 00:29:21.900 05:20:22 spdkcli_raid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:21.900 05:20:22 spdkcli_raid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 134414 00:29:21.900 05:20:22 spdkcli_raid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:21.900 05:20:22 spdkcli_raid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:21.900 05:20:22 spdkcli_raid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 134414' 00:29:21.900 killing process with pid 134414 00:29:21.900 05:20:22 spdkcli_raid -- common/autotest_common.sh@967 -- # kill 134414 00:29:21.900 05:20:22 spdkcli_raid -- common/autotest_common.sh@972 -- # wait 134414 00:29:22.465 05:20:22 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:29:22.465 05:20:22 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 134414 ']' 00:29:22.465 05:20:22 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 134414 00:29:22.465 05:20:22 spdkcli_raid -- common/autotest_common.sh@948 -- # '[' -z 134414 ']' 00:29:22.465 05:20:22 spdkcli_raid -- common/autotest_common.sh@952 -- # kill -0 134414 00:29:22.465 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (134414) - No such process 00:29:22.465 Process with pid 134414 is not found 00:29:22.465 05:20:22 spdkcli_raid -- common/autotest_common.sh@975 -- # echo 'Process with pid 134414 is not found' 00:29:22.465 05:20:22 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:29:22.465 05:20:22 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:29:22.465 05:20:22 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:29:22.465 05:20:22 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:29:22.465 ************************************ 00:29:22.465 END TEST 
spdkcli_raid 00:29:22.465 ************************************ 00:29:22.465 00:29:22.465 real 0m7.490s 00:29:22.465 user 0m16.119s 00:29:22.465 sys 0m0.844s 00:29:22.465 05:20:22 spdkcli_raid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:22.465 05:20:22 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:29:22.465 05:20:22 -- common/autotest_common.sh@1142 -- # return 0 00:29:22.465 05:20:22 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:29:22.465 05:20:22 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:29:22.465 05:20:22 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:29:22.465 05:20:22 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:29:22.465 05:20:22 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:29:22.465 05:20:22 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:29:22.465 05:20:22 -- spdk/autotest.sh@330 -- # '[' 1 -eq 1 ']' 00:29:22.465 05:20:22 -- spdk/autotest.sh@331 -- # run_test blockdev_rbd /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh rbd 00:29:22.465 05:20:22 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:22.465 05:20:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:22.465 05:20:22 -- common/autotest_common.sh@10 -- # set +x 00:29:22.465 ************************************ 00:29:22.465 START TEST blockdev_rbd 00:29:22.465 ************************************ 00:29:22.465 05:20:22 blockdev_rbd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh rbd 00:29:22.465 * Looking for test storage... 
00:29:22.465 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:29:22.465 05:20:22 blockdev_rbd -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:29:22.465 05:20:22 blockdev_rbd -- bdev/nbd_common.sh@6 -- # set -e 00:29:22.465 05:20:22 blockdev_rbd -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:29:22.465 05:20:22 blockdev_rbd -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:22.465 05:20:22 blockdev_rbd -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:29:22.465 05:20:22 blockdev_rbd -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:29:22.465 05:20:22 blockdev_rbd -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:29:22.465 05:20:22 blockdev_rbd -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:29:22.465 05:20:22 blockdev_rbd -- bdev/blockdev.sh@20 -- # : 00:29:22.465 05:20:22 blockdev_rbd -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:29:22.465 05:20:22 blockdev_rbd -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:29:22.465 05:20:22 blockdev_rbd -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:29:22.465 05:20:22 blockdev_rbd -- bdev/blockdev.sh@673 -- # uname -s 00:29:22.465 05:20:22 blockdev_rbd -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:29:22.465 05:20:22 blockdev_rbd -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:29:22.465 05:20:22 blockdev_rbd -- bdev/blockdev.sh@681 -- # test_type=rbd 00:29:22.465 05:20:22 blockdev_rbd -- bdev/blockdev.sh@682 -- # crypto_device= 00:29:22.465 05:20:22 blockdev_rbd -- bdev/blockdev.sh@683 -- # dek= 00:29:22.465 05:20:22 blockdev_rbd -- bdev/blockdev.sh@684 -- # env_ctx= 00:29:22.465 05:20:22 blockdev_rbd -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:29:22.465 05:20:22 blockdev_rbd -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:29:22.465 05:20:22 blockdev_rbd -- bdev/blockdev.sh@689 -- # [[ rbd == 
bdev ]] 00:29:22.465 05:20:22 blockdev_rbd -- bdev/blockdev.sh@689 -- # [[ rbd == crypto_* ]] 00:29:22.465 05:20:22 blockdev_rbd -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:29:22.465 05:20:22 blockdev_rbd -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=134654 00:29:22.465 05:20:22 blockdev_rbd -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:22.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:22.465 05:20:22 blockdev_rbd -- bdev/blockdev.sh@49 -- # waitforlisten 134654 00:29:22.465 05:20:22 blockdev_rbd -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:29:22.465 05:20:22 blockdev_rbd -- common/autotest_common.sh@829 -- # '[' -z 134654 ']' 00:29:22.465 05:20:22 blockdev_rbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:22.465 05:20:22 blockdev_rbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:22.465 05:20:22 blockdev_rbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:22.465 05:20:22 blockdev_rbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:22.465 05:20:22 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:29:22.465 [2024-07-23 05:20:22.601582] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:29:22.465 [2024-07-23 05:20:22.601673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134654 ] 00:29:22.722 [2024-07-23 05:20:22.735290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:22.722 [2024-07-23 05:20:22.847666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:23.689 05:20:23 blockdev_rbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:23.689 05:20:23 blockdev_rbd -- common/autotest_common.sh@862 -- # return 0 00:29:23.689 05:20:23 blockdev_rbd -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:29:23.689 05:20:23 blockdev_rbd -- bdev/blockdev.sh@719 -- # setup_rbd_conf 00:29:23.689 05:20:23 blockdev_rbd -- bdev/blockdev.sh@260 -- # timing_enter rbd_setup 00:29:23.689 05:20:23 blockdev_rbd -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:23.689 05:20:23 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:29:23.689 05:20:23 blockdev_rbd -- bdev/blockdev.sh@261 -- # rbd_setup 127.0.0.1 00:29:23.689 05:20:23 blockdev_rbd -- common/autotest_common.sh@1005 -- # '[' -z 127.0.0.1 ']' 00:29:23.689 05:20:23 blockdev_rbd -- common/autotest_common.sh@1009 -- # '[' -n '' ']' 00:29:23.689 05:20:23 blockdev_rbd -- common/autotest_common.sh@1018 -- # hash ceph 00:29:23.689 05:20:23 blockdev_rbd -- common/autotest_common.sh@1019 -- # export PG_NUM=128 00:29:23.689 05:20:23 blockdev_rbd -- common/autotest_common.sh@1019 -- # PG_NUM=128 00:29:23.689 05:20:23 blockdev_rbd -- common/autotest_common.sh@1020 -- # export RBD_POOL=rbd 00:29:23.689 05:20:23 blockdev_rbd -- common/autotest_common.sh@1020 -- # RBD_POOL=rbd 00:29:23.689 05:20:23 blockdev_rbd -- common/autotest_common.sh@1021 -- # export RBD_NAME=foo 00:29:23.689 05:20:23 blockdev_rbd -- common/autotest_common.sh@1021 -- # RBD_NAME=foo 
00:29:23.689 05:20:23 blockdev_rbd -- common/autotest_common.sh@1022 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:29:23.689 + base_dir=/var/tmp/ceph 00:29:23.689 + image=/var/tmp/ceph/ceph_raw.img 00:29:23.689 + dev=/dev/loop200 00:29:23.689 + pkill -9 ceph 00:29:23.689 + sleep 3 00:29:26.967 + umount /dev/loop200p2 00:29:26.967 umount: /dev/loop200p2: no mount point specified. 00:29:26.967 + losetup -d /dev/loop200 00:29:26.967 losetup: /dev/loop200: detach failed: No such device or address 00:29:26.967 + rm -rf /var/tmp/ceph 00:29:26.967 05:20:26 blockdev_rbd -- common/autotest_common.sh@1023 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/start.sh 127.0.0.1 00:29:26.967 + set -e 00:29:26.967 +++ dirname /home/vagrant/spdk_repo/spdk/scripts/ceph/start.sh 00:29:26.967 ++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/ceph 00:29:26.967 + script_dir=/home/vagrant/spdk_repo/spdk/scripts/ceph 00:29:26.967 + base_dir=/var/tmp/ceph 00:29:26.967 + mon_ip=127.0.0.1 00:29:26.967 + mon_dir=/var/tmp/ceph/mon.a 00:29:26.967 + pid_dir=/var/tmp/ceph/pid 00:29:26.967 + ceph_conf=/var/tmp/ceph/ceph.conf 00:29:26.967 + mnt_dir=/var/tmp/ceph/mnt 00:29:26.967 + image=/var/tmp/ceph_raw.img 00:29:26.967 + dev=/dev/loop200 00:29:26.967 + modprobe loop 00:29:26.967 + umount /dev/loop200p2 00:29:26.967 umount: /dev/loop200p2: no mount point specified. 00:29:26.967 + true 00:29:26.967 + losetup -d /dev/loop200 00:29:26.967 losetup: /dev/loop200: detach failed: No such device or address 00:29:26.967 + true 00:29:26.967 + '[' -d /var/tmp/ceph ']' 00:29:26.967 + mkdir /var/tmp/ceph 00:29:26.967 + cp /home/vagrant/spdk_repo/spdk/scripts/ceph/ceph.conf /var/tmp/ceph/ceph.conf 00:29:26.967 + '[' '!' 
-e /var/tmp/ceph_raw.img ']' 00:29:26.967 + fallocate -l 4G /var/tmp/ceph_raw.img 00:29:26.967 + mknod /dev/loop200 b 7 200 00:29:26.967 mknod: /dev/loop200: File exists 00:29:26.967 + true 00:29:26.967 + losetup /dev/loop200 /var/tmp/ceph_raw.img 00:29:26.967 Partitioning /dev/loop200 00:29:26.967 + PARTED='parted -s' 00:29:26.967 + SGDISK=sgdisk 00:29:26.967 + echo 'Partitioning /dev/loop200' 00:29:26.967 + parted -s /dev/loop200 mktable gpt 00:29:26.967 + sleep 2 00:29:28.863 + parted -s /dev/loop200 mkpart primary 0% 2GiB 00:29:28.863 + parted -s /dev/loop200 mkpart primary 2GiB 100% 00:29:28.863 Setting name on /dev/loop200 00:29:28.863 + partno=0 00:29:28.863 + echo 'Setting name on /dev/loop200' 00:29:28.863 + sgdisk -c 1:osd-device-0-journal /dev/loop200 00:29:29.916 Warning: The kernel is still using the old partition table. 00:29:29.916 The new table will be used at the next reboot or after you 00:29:29.916 run partprobe(8) or kpartx(8) 00:29:29.916 The operation has completed successfully. 00:29:29.916 + sgdisk -c 2:osd-device-0-data /dev/loop200 00:29:30.849 Warning: The kernel is still using the old partition table. 00:29:30.849 The new table will be used at the next reboot or after you 00:29:30.849 run partprobe(8) or kpartx(8) 00:29:30.849 The operation has completed successfully. 
00:29:30.849 + kpartx /dev/loop200 00:29:30.849 loop200p1 : 0 4192256 /dev/loop200 2048 00:29:30.849 loop200p2 : 0 4192256 /dev/loop200 4194304 00:29:30.849 ++ ceph -v 00:29:30.849 ++ awk '{print $3}' 00:29:30.849 + ceph_version=17.2.7 00:29:30.849 + ceph_maj=17 00:29:30.849 + '[' 17 -gt 12 ']' 00:29:30.849 + update_config=true 00:29:30.849 + rm -f /var/log/ceph/ceph-mon.a.log 00:29:30.849 + set_min_mon_release='--set-min-mon-release 14' 00:29:30.849 + ceph_osd_extra_config='--check-needs-journal --no-mon-config' 00:29:30.849 + mnt_pt=/var/tmp/ceph/mnt/osd-device-0-data 00:29:30.849 + mkdir -p /var/tmp/ceph/mnt/osd-device-0-data 00:29:30.849 + mkfs.xfs -f /dev/disk/by-partlabel/osd-device-0-data 00:29:30.849 meta-data=/dev/disk/by-partlabel/osd-device-0-data isize=512 agcount=4, agsize=131008 blks 00:29:30.849 = sectsz=512 attr=2, projid32bit=1 00:29:30.849 = crc=1 finobt=1, sparse=1, rmapbt=0 00:29:30.849 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:29:30.849 data = bsize=4096 blocks=524032, imaxpct=25 00:29:30.849 = sunit=0 swidth=0 blks 00:29:30.849 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:29:30.849 log =internal log bsize=4096 blocks=16384, version=2 00:29:30.849 = sectsz=512 sunit=0 blks, lazy-count=1 00:29:30.849 realtime =none extsz=4096 blocks=0, rtextents=0 00:29:30.849 Discarding blocks...Done. 00:29:30.849 + mount /dev/disk/by-partlabel/osd-device-0-data /var/tmp/ceph/mnt/osd-device-0-data 00:29:30.849 + cat 00:29:30.849 + rm -rf '/var/tmp/ceph/mon.a/*' 00:29:30.849 + mkdir -p /var/tmp/ceph/mon.a 00:29:30.849 + mkdir -p /var/tmp/ceph/pid 00:29:30.849 + rm -f /etc/ceph/ceph.client.admin.keyring 00:29:30.849 + ceph-authtool --create-keyring --gen-key --name=mon. 
/var/tmp/ceph/keyring --cap mon 'allow *' 00:29:30.849 creating /var/tmp/ceph/keyring 00:29:30.849 + ceph-authtool --gen-key --name=client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *' /var/tmp/ceph/keyring 00:29:31.107 + monmaptool --create --clobber --add a 127.0.0.1:12046 --print /var/tmp/ceph/monmap --set-min-mon-release 14 00:29:31.107 monmaptool: monmap file /var/tmp/ceph/monmap 00:29:31.107 monmaptool: generated fsid 2936d067-1b5e-4009-93b4-16a02b00d0f5 00:29:31.107 setting min_mon_release = octopus 00:29:31.107 epoch 0 00:29:31.107 fsid 2936d067-1b5e-4009-93b4-16a02b00d0f5 00:29:31.107 last_changed 2024-07-23T05:20:31.108873+0000 00:29:31.107 created 2024-07-23T05:20:31.108873+0000 00:29:31.107 min_mon_release 15 (octopus) 00:29:31.107 election_strategy: 1 00:29:31.107 0: v2:127.0.0.1:12046/0 mon.a 00:29:31.107 monmaptool: writing epoch 0 to /var/tmp/ceph/monmap (1 monitors) 00:29:31.107 + sh -c 'ulimit -c unlimited && exec ceph-mon --mkfs -c /var/tmp/ceph/ceph.conf -i a --monmap=/var/tmp/ceph/monmap --keyring=/var/tmp/ceph/keyring --mon-data=/var/tmp/ceph/mon.a' 00:29:31.107 + '[' true = true ']' 00:29:31.107 + sed -i 's/mon addr = /mon addr = v2:/g' /var/tmp/ceph/ceph.conf 00:29:31.107 + cp /var/tmp/ceph/keyring /var/tmp/ceph/mon.a/keyring 00:29:31.107 + cp /var/tmp/ceph/ceph.conf /etc/ceph/ceph.conf 00:29:31.107 + cp /var/tmp/ceph/keyring /etc/ceph/keyring 00:29:31.107 + cp /var/tmp/ceph/keyring /etc/ceph/ceph.client.admin.keyring 00:29:31.107 + chmod a+r /etc/ceph/ceph.client.admin.keyring 00:29:31.107 ++ hostname 00:29:31.107 + ceph-run sh -c 'ulimit -n 16384 && ulimit -c unlimited && exec ceph-mon -c /var/tmp/ceph/ceph.conf -i a --keyring=/var/tmp/ceph/keyring --pid-file=/var/tmp/ceph/pid/root@fedora38-cloud-1716830599-074-updated-1705279005.pid --mon-data=/var/tmp/ceph/mon.a' 00:29:31.107 + true 00:29:31.107 + '[' true = true ']' 00:29:31.107 + ceph-conf --name mon.a --show-config-value log_file 00:29:31.107 
/var/log/ceph/ceph-mon.a.log 00:29:31.107 ++ ceph -s 00:29:31.107 ++ grep id 00:29:31.107 ++ awk '{print $2}' 00:29:31.365 + fsid=2936d067-1b5e-4009-93b4-16a02b00d0f5 00:29:31.365 + sed -i 's/perf = true/perf = true\n\tfsid = 2936d067-1b5e-4009-93b4-16a02b00d0f5 \n/g' /var/tmp/ceph/ceph.conf 00:29:31.365 + (( ceph_maj < 18 )) 00:29:31.365 + sed -i 's/perf = true/perf = true\n\tosd objectstore = filestore\n/g' /var/tmp/ceph/ceph.conf 00:29:31.365 + cat /var/tmp/ceph/ceph.conf 00:29:31.365 [global] 00:29:31.365 debug_lockdep = 0/0 00:29:31.365 debug_context = 0/0 00:29:31.365 debug_crush = 0/0 00:29:31.365 debug_buffer = 0/0 00:29:31.365 debug_timer = 0/0 00:29:31.365 debug_filer = 0/0 00:29:31.365 debug_objecter = 0/0 00:29:31.365 debug_rados = 0/0 00:29:31.365 debug_rbd = 0/0 00:29:31.365 debug_ms = 0/0 00:29:31.365 debug_monc = 0/0 00:29:31.365 debug_tp = 0/0 00:29:31.365 debug_auth = 0/0 00:29:31.365 debug_finisher = 0/0 00:29:31.365 debug_heartbeatmap = 0/0 00:29:31.365 debug_perfcounter = 0/0 00:29:31.365 debug_asok = 0/0 00:29:31.365 debug_throttle = 0/0 00:29:31.365 debug_mon = 0/0 00:29:31.365 debug_paxos = 0/0 00:29:31.365 debug_rgw = 0/0 00:29:31.365 00:29:31.365 perf = true 00:29:31.365 osd objectstore = filestore 00:29:31.365 00:29:31.365 fsid = 2936d067-1b5e-4009-93b4-16a02b00d0f5 00:29:31.365 00:29:31.365 mutex_perf_counter = false 00:29:31.365 throttler_perf_counter = false 00:29:31.365 rbd cache = false 00:29:31.365 mon_allow_pool_delete = true 00:29:31.365 00:29:31.365 osd_pool_default_size = 1 00:29:31.365 00:29:31.365 [mon] 00:29:31.365 mon_max_pool_pg_num=166496 00:29:31.365 mon_osd_max_split_count = 10000 00:29:31.365 mon_pg_warn_max_per_osd = 10000 00:29:31.365 00:29:31.365 [osd] 00:29:31.365 osd_op_threads = 64 00:29:31.365 filestore_queue_max_ops=5000 00:29:31.365 filestore_queue_committing_max_ops=5000 00:29:31.365 journal_max_write_entries=1000 00:29:31.365 journal_queue_max_ops=3000 00:29:31.365 objecter_inflight_ops=102400 00:29:31.365 
filestore_wbthrottle_enable=false 00:29:31.365 filestore_queue_max_bytes=1048576000 00:29:31.365 filestore_queue_committing_max_bytes=1048576000 00:29:31.365 journal_max_write_bytes=1048576000 00:29:31.365 journal_queue_max_bytes=1048576000 00:29:31.365 ms_dispatch_throttle_bytes=1048576000 00:29:31.365 objecter_inflight_op_bytes=1048576000 00:29:31.365 filestore_max_sync_interval=10 00:29:31.365 osd_client_message_size_cap = 0 00:29:31.365 osd_client_message_cap = 0 00:29:31.365 osd_enable_op_tracker = false 00:29:31.365 filestore_fd_cache_size = 10240 00:29:31.365 filestore_fd_cache_shards = 64 00:29:31.365 filestore_op_threads = 16 00:29:31.365 osd_op_num_shards = 48 00:29:31.365 osd_op_num_threads_per_shard = 2 00:29:31.365 osd_pg_object_context_cache_count = 10240 00:29:31.365 filestore_odsync_write = True 00:29:31.365 journal_dynamic_throttle = True 00:29:31.365 00:29:31.365 [osd.0] 00:29:31.365 osd data = /var/tmp/ceph/mnt/osd-device-0-data 00:29:31.365 osd journal = /dev/disk/by-partlabel/osd-device-0-journal 00:29:31.365 00:29:31.365 # add mon address 00:29:31.365 [mon.a] 00:29:31.365 mon addr = v2:127.0.0.1:12046 00:29:31.365 + i=0 00:29:31.365 + mkdir -p /var/tmp/ceph/mnt 00:29:31.366 ++ uuidgen 00:29:31.366 + uuid=e0cacd97-ebfd-43aa-9192-b73943bfbb10 00:29:31.366 + ceph -c /var/tmp/ceph/ceph.conf osd create e0cacd97-ebfd-43aa-9192-b73943bfbb10 0 00:29:31.623 0 00:29:31.882 + ceph-osd -c /var/tmp/ceph/ceph.conf -i 0 --mkfs --mkkey --osd-uuid e0cacd97-ebfd-43aa-9192-b73943bfbb10 --check-needs-journal --no-mon-config 00:29:31.882 2024-07-23T05:20:31.884+0000 7f1c6a065400 -1 auth: error reading file: /var/tmp/ceph/mnt/osd-device-0-data/keyring: can't open /var/tmp/ceph/mnt/osd-device-0-data/keyring: (2) No such file or directory 00:29:31.882 2024-07-23T05:20:31.884+0000 7f1c6a065400 -1 created new key in keyring /var/tmp/ceph/mnt/osd-device-0-data/keyring 00:29:31.882 2024-07-23T05:20:31.936+0000 7f1c6a065400 -1 journal check: ondisk fsid 
00000000-0000-0000-0000-000000000000 doesn't match expected e0cacd97-ebfd-43aa-9192-b73943bfbb10, invalid (someone else's?) journal 00:29:31.882 2024-07-23T05:20:31.972+0000 7f1c6a065400 -1 journal do_read_entry(4096): bad header magic 00:29:31.882 2024-07-23T05:20:31.972+0000 7f1c6a065400 -1 journal do_read_entry(4096): bad header magic 00:29:31.882 ++ hostname 00:29:31.882 + ceph -c /var/tmp/ceph/ceph.conf osd crush add osd.0 1.0 host=fedora38-cloud-1716830599-074-updated-1705279005 root=default 00:29:33.266 add item id 0 name 'osd.0' weight 1 at location {host=fedora38-cloud-1716830599-074-updated-1705279005,root=default} to crush map 00:29:33.266 + ceph -c /var/tmp/ceph/ceph.conf -i /var/tmp/ceph/mnt/osd-device-0-data/keyring auth add osd.0 osd 'allow *' mon 'allow profile osd' mgr 'allow *' 00:29:33.526 added key for osd.0 00:29:33.526 ++ ceph -c /var/tmp/ceph/ceph.conf config get osd osd_class_dir 00:29:33.785 + class_dir=/lib64/rados-classes 00:29:33.785 + [[ -e /lib64/rados-classes ]] 00:29:33.785 + ceph -c /var/tmp/ceph/ceph.conf config set osd osd_class_dir /lib64/rados-classes 00:29:34.043 + pkill -9 ceph-osd 00:29:34.304 + true 00:29:34.304 + sleep 2 00:29:36.207 + mkdir -p /var/tmp/ceph/pid 00:29:36.207 + env -i TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 ceph-osd -c /var/tmp/ceph/ceph.conf -i 0 --pid-file=/var/tmp/ceph/pid/ceph-osd.0.pid 00:29:36.207 2024-07-23T05:20:36.325+0000 7f52e186c400 -1 Falling back to public interface 00:29:36.207 2024-07-23T05:20:36.377+0000 7f52e186c400 -1 journal do_read_entry(8192): bad header magic 00:29:36.207 2024-07-23T05:20:36.377+0000 7f52e186c400 -1 journal do_read_entry(8192): bad header magic 00:29:36.207 2024-07-23T05:20:36.386+0000 7f52e186c400 -1 osd.0 0 log_to_monitors true 00:29:37.583 05:20:37 blockdev_rbd -- common/autotest_common.sh@1025 -- # ceph osd pool create rbd 128 00:29:38.520 pool 'rbd' created 00:29:38.520 05:20:38 blockdev_rbd -- common/autotest_common.sh@1026 -- # rbd create foo --size 1000 
00:29:42.714 05:20:42 blockdev_rbd -- bdev/blockdev.sh@262 -- # timing_exit rbd_setup 00:29:42.714 05:20:42 blockdev_rbd -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:42.714 05:20:42 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:29:42.714 05:20:42 blockdev_rbd -- bdev/blockdev.sh@264 -- # rpc_cmd bdev_rbd_create -b Ceph0 rbd foo 512 00:29:42.714 05:20:42 blockdev_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.714 05:20:42 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:29:42.714 [2024-07-23 05:20:42.527773] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:29:42.714 WARNING:bdev_rbd_create should be used with specifying -c to have a cluster name after bdev_rbd_register_cluster. 00:29:42.714 Ceph0 00:29:42.714 05:20:42 blockdev_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.714 05:20:42 blockdev_rbd -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:29:42.714 05:20:42 blockdev_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.714 05:20:42 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:29:42.714 05:20:42 blockdev_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.714 05:20:42 blockdev_rbd -- bdev/blockdev.sh@739 -- # cat 00:29:42.714 05:20:42 blockdev_rbd -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:29:42.714 05:20:42 blockdev_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.714 05:20:42 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:29:42.714 05:20:42 blockdev_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.714 05:20:42 blockdev_rbd -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:29:42.714 05:20:42 blockdev_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.714 05:20:42 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:29:42.714 05:20:42 blockdev_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:29:42.714 05:20:42 blockdev_rbd -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:29:42.714 05:20:42 blockdev_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.714 05:20:42 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:29:42.714 05:20:42 blockdev_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.714 05:20:42 blockdev_rbd -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:29:42.714 05:20:42 blockdev_rbd -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:29:42.714 05:20:42 blockdev_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.714 05:20:42 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:29:42.714 05:20:42 blockdev_rbd -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:29:42.714 05:20:42 blockdev_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.714 05:20:42 blockdev_rbd -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:29:42.714 05:20:42 blockdev_rbd -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Ceph0",' ' "aliases": [' ' "e1cee80a-82c1-479c-bba1-c45225badaf2"' ' ],' ' "product_name": "Ceph Rbd Disk",' ' "block_size": 512,' ' "num_blocks": 2048000,' ' "uuid": "e1cee80a-82c1-479c-bba1-c45225badaf2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": true,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "rbd": {' ' "pool_name": "rbd",' ' "rbd_name": "foo"' 
' }' ' }' '}' 00:29:42.714 05:20:42 blockdev_rbd -- bdev/blockdev.sh@748 -- # jq -r .name 00:29:42.714 05:20:42 blockdev_rbd -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:29:42.714 05:20:42 blockdev_rbd -- bdev/blockdev.sh@751 -- # hello_world_bdev=Ceph0 00:29:42.714 05:20:42 blockdev_rbd -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:29:42.714 05:20:42 blockdev_rbd -- bdev/blockdev.sh@753 -- # killprocess 134654 00:29:42.714 05:20:42 blockdev_rbd -- common/autotest_common.sh@948 -- # '[' -z 134654 ']' 00:29:42.714 05:20:42 blockdev_rbd -- common/autotest_common.sh@952 -- # kill -0 134654 00:29:42.714 05:20:42 blockdev_rbd -- common/autotest_common.sh@953 -- # uname 00:29:42.714 05:20:42 blockdev_rbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:42.714 05:20:42 blockdev_rbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 134654 00:29:42.714 killing process with pid 134654 00:29:42.714 05:20:42 blockdev_rbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:42.714 05:20:42 blockdev_rbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:42.714 05:20:42 blockdev_rbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 134654' 00:29:42.714 05:20:42 blockdev_rbd -- common/autotest_common.sh@967 -- # kill 134654 00:29:42.714 05:20:42 blockdev_rbd -- common/autotest_common.sh@972 -- # wait 134654 00:29:42.973 05:20:43 blockdev_rbd -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:42.973 05:20:43 blockdev_rbd -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Ceph0 '' 00:29:42.973 05:20:43 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:29:42.973 05:20:43 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:42.973 05:20:43 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 
00:29:42.973 ************************************ 00:29:42.973 START TEST bdev_hello_world 00:29:42.973 ************************************ 00:29:42.973 05:20:43 blockdev_rbd.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Ceph0 '' 00:29:42.973 [2024-07-23 05:20:43.169609] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:29:42.974 [2024-07-23 05:20:43.169696] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135518 ] 00:29:43.249 [2024-07-23 05:20:43.302147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:43.249 [2024-07-23 05:20:43.394477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:43.511 [2024-07-23 05:20:43.566910] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:29:43.511 [2024-07-23 05:20:43.577621] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:29:43.511 [2024-07-23 05:20:43.577658] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Ceph0 00:29:43.511 [2024-07-23 05:20:43.577673] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:29:43.511 [2024-07-23 05:20:43.578941] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:29:43.511 [2024-07-23 05:20:43.590768] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:29:43.511 [2024-07-23 05:20:43.590803] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:29:43.511 [2024-07-23 05:20:43.595261] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:29:43.511 00:29:43.511 [2024-07-23 05:20:43.595291] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:29:43.770 00:29:43.770 real 0m0.708s 00:29:43.770 user 0m0.434s 00:29:43.770 sys 0m0.158s 00:29:43.770 ************************************ 00:29:43.770 END TEST bdev_hello_world 00:29:43.770 ************************************ 00:29:43.770 05:20:43 blockdev_rbd.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:43.770 05:20:43 blockdev_rbd.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:29:43.770 05:20:43 blockdev_rbd -- common/autotest_common.sh@1142 -- # return 0 00:29:43.770 05:20:43 blockdev_rbd -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:29:43.770 05:20:43 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:43.770 05:20:43 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:43.770 05:20:43 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:29:43.770 ************************************ 00:29:43.770 START TEST bdev_bounds 00:29:43.770 ************************************ 00:29:43.770 05:20:43 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:29:43.770 05:20:43 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=135563 00:29:43.770 05:20:43 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:29:43.770 Process bdevio pid: 135563 00:29:43.770 05:20:43 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 135563' 00:29:43.770 05:20:43 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:43.770 05:20:43 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 135563 00:29:43.770 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:29:43.770 05:20:43 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 135563 ']' 00:29:43.770 05:20:43 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:43.770 05:20:43 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:43.770 05:20:43 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:43.770 05:20:43 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:43.770 05:20:43 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:29:43.770 [2024-07-23 05:20:43.955043] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:29:43.770 [2024-07-23 05:20:43.955184] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135563 ] 00:29:44.028 [2024-07-23 05:20:44.101400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:44.028 [2024-07-23 05:20:44.193334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:44.028 [2024-07-23 05:20:44.193443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:44.028 [2024-07-23 05:20:44.193453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:44.286 [2024-07-23 05:20:44.373692] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:29:44.852 05:20:44 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:44.852 05:20:44 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:29:44.852 05:20:44 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@293 -- # 
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:29:44.852 I/O targets: 00:29:44.852 Ceph0: 2048000 blocks of 512 bytes (1000 MiB) 00:29:44.852 00:29:44.852 00:29:44.852 CUnit - A unit testing framework for C - Version 2.1-3 00:29:44.852 http://cunit.sourceforge.net/ 00:29:44.852 00:29:44.852 00:29:44.852 Suite: bdevio tests on: Ceph0 00:29:44.852 Test: blockdev write read block ...passed 00:29:44.852 Test: blockdev write zeroes read block ...passed 00:29:44.852 Test: blockdev write zeroes read no split ...passed 00:29:44.852 Test: blockdev write zeroes read split ...passed 00:29:44.852 Test: blockdev write zeroes read split partial ...passed 00:29:44.852 Test: blockdev reset ...passed 00:29:44.852 Test: blockdev write read 8 blocks ...passed 00:29:44.852 Test: blockdev write read size > 128k ...passed 00:29:44.852 Test: blockdev write read invalid size ...passed 00:29:44.852 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:44.852 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:44.852 Test: blockdev write read max offset ...passed 00:29:45.111 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:45.111 Test: blockdev writev readv 8 blocks ...passed 00:29:45.111 Test: blockdev writev readv 30 x 1block ...passed 00:29:45.111 Test: blockdev writev readv block ...passed 00:29:45.111 Test: blockdev writev readv size > 128k ...passed 00:29:45.111 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:45.111 Test: blockdev comparev and writev ...passed 00:29:45.111 Test: blockdev nvme passthru rw ...passed 00:29:45.111 Test: blockdev nvme passthru vendor specific ...passed 00:29:45.111 Test: blockdev nvme admin passthru ...passed 00:29:45.111 Test: blockdev copy ...passed 00:29:45.111 00:29:45.111 Run Summary: Type Total Ran Passed Failed Inactive 00:29:45.111 suites 1 1 n/a 0 0 00:29:45.111 tests 23 23 23 0 0 00:29:45.111 asserts 130 130 130 0 n/a 
00:29:45.111 00:29:45.111 Elapsed time = 0.282 seconds 00:29:45.111 0 00:29:45.111 05:20:45 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 135563 00:29:45.111 05:20:45 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 135563 ']' 00:29:45.111 05:20:45 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 135563 00:29:45.111 05:20:45 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:29:45.111 05:20:45 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:45.111 05:20:45 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 135563 00:29:45.111 05:20:45 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:45.111 05:20:45 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:45.111 05:20:45 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 135563' 00:29:45.111 killing process with pid 135563 00:29:45.111 05:20:45 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@967 -- # kill 135563 00:29:45.111 05:20:45 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@972 -- # wait 135563 00:29:45.369 05:20:45 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:29:45.369 00:29:45.369 real 0m1.518s 00:29:45.369 user 0m3.867s 00:29:45.369 sys 0m0.289s 00:29:45.369 05:20:45 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:45.369 05:20:45 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:29:45.369 ************************************ 00:29:45.369 END TEST bdev_bounds 00:29:45.369 ************************************ 00:29:45.369 05:20:45 blockdev_rbd -- common/autotest_common.sh@1142 -- # return 0 00:29:45.369 05:20:45 blockdev_rbd -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Ceph0 '' 00:29:45.369 05:20:45 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:29:45.369 05:20:45 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:45.369 05:20:45 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:29:45.369 ************************************ 00:29:45.369 START TEST bdev_nbd 00:29:45.369 ************************************ 00:29:45.370 05:20:45 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Ceph0 '' 00:29:45.370 05:20:45 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:29:45.370 05:20:45 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:29:45.370 05:20:45 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:45.370 05:20:45 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:45.370 05:20:45 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Ceph0') 00:29:45.370 05:20:45 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:29:45.370 05:20:45 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:29:45.370 05:20:45 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:29:45.370 05:20:45 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:29:45.370 05:20:45 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:29:45.370 05:20:45 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:29:45.370 05:20:45 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:29:45.370 05:20:45 blockdev_rbd.bdev_nbd -- 
bdev/blockdev.sh@313 -- # local nbd_list 00:29:45.370 05:20:45 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Ceph0') 00:29:45.370 05:20:45 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:29:45.370 05:20:45 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=135624 00:29:45.370 05:20:45 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:29:45.370 05:20:45 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 135624 /var/tmp/spdk-nbd.sock 00:29:45.370 05:20:45 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:45.370 05:20:45 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 135624 ']' 00:29:45.370 05:20:45 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:29:45.370 05:20:45 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:45.370 05:20:45 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:29:45.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:29:45.370 05:20:45 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:45.370 05:20:45 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:29:45.370 [2024-07-23 05:20:45.503002] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:29:45.370 [2024-07-23 05:20:45.503238] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:45.628 [2024-07-23 05:20:45.639492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:45.628 [2024-07-23 05:20:45.729869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.887 [2024-07-23 05:20:45.906984] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:29:46.454 05:20:46 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:46.454 05:20:46 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:29:46.454 05:20:46 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Ceph0 00:29:46.454 05:20:46 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:46.454 05:20:46 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Ceph0') 00:29:46.454 05:20:46 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:29:46.454 05:20:46 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Ceph0 00:29:46.454 05:20:46 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:46.454 05:20:46 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Ceph0') 00:29:46.454 05:20:46 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:29:46.454 05:20:46 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:29:46.454 05:20:46 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:29:46.454 05:20:46 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:29:46.454 05:20:46 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@27 -- # 
(( i < 1 )) 00:29:46.454 05:20:46 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Ceph0 00:29:46.712 05:20:46 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:29:46.712 05:20:46 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:29:46.712 05:20:46 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:29:46.712 05:20:46 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:29:46.712 05:20:46 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:29:46.712 05:20:46 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:46.712 05:20:46 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:46.712 05:20:46 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:29:46.712 05:20:46 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:29:46.712 05:20:46 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:46.712 05:20:46 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:46.712 05:20:46 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:46.712 1+0 records in 00:29:46.712 1+0 records out 00:29:46.712 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000732233 s, 5.6 MB/s 00:29:46.712 05:20:46 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:46.712 05:20:46 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:29:46.712 05:20:46 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:46.712 05:20:46 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 
'!=' 0 ']' 00:29:46.712 05:20:46 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:29:46.712 05:20:46 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:46.712 05:20:46 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:29:46.712 05:20:46 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:46.971 05:20:46 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:29:46.971 { 00:29:46.971 "nbd_device": "/dev/nbd0", 00:29:46.971 "bdev_name": "Ceph0" 00:29:46.971 } 00:29:46.971 ]' 00:29:46.971 05:20:46 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:29:46.971 05:20:46 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:29:46.971 05:20:46 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:29:46.971 { 00:29:46.971 "nbd_device": "/dev/nbd0", 00:29:46.971 "bdev_name": "Ceph0" 00:29:46.971 } 00:29:46.971 ]' 00:29:46.971 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:46.971 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:46.971 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:46.971 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:46.971 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:29:46.971 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:46.971 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:47.230 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:47.230 05:20:47 
blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:47.230 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:47.230 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:47.230 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:47.230 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:47.230 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:47.230 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:47.230 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:47.230 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:47.230 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:47.489 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:47.489 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:47.489 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:47.489 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:47.489 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:29:47.489 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:47.489 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:29:47.489 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:29:47.489 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:29:47.489 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:29:47.489 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:29:47.489 05:20:47 blockdev_rbd.bdev_nbd -- 
bdev/nbd_common.sh@127 -- # return 0 00:29:47.489 05:20:47 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Ceph0 /dev/nbd0 00:29:47.489 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:47.489 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Ceph0') 00:29:47.489 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:29:47.489 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:29:47.489 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:29:47.489 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Ceph0 /dev/nbd0 00:29:47.489 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:47.489 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Ceph0') 00:29:47.489 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:47.489 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:47.489 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:47.489 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:29:47.489 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:47.489 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:47.489 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Ceph0 /dev/nbd0 00:29:47.748 /dev/nbd0 00:29:47.748 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:47.748 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:47.748 05:20:47 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@866 -- # local 
nbd_name=nbd0 00:29:47.748 05:20:47 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:29:47.748 05:20:47 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:47.748 05:20:47 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:47.748 05:20:47 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:29:47.748 05:20:47 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:29:47.748 05:20:47 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:47.748 05:20:47 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:47.748 05:20:47 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:47.748 1+0 records in 00:29:47.748 1+0 records out 00:29:47.748 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00123852 s, 3.3 MB/s 00:29:47.748 05:20:47 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:47.748 05:20:47 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:29:47.748 05:20:47 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:47.748 05:20:47 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:47.748 05:20:47 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:29:47.748 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:47.748 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:47.748 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:47.748 05:20:47 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:47.748 05:20:47 blockdev_rbd.bdev_nbd -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:48.007 05:20:48 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:29:48.007 { 00:29:48.007 "nbd_device": "/dev/nbd0", 00:29:48.007 "bdev_name": "Ceph0" 00:29:48.007 } 00:29:48.007 ]' 00:29:48.007 05:20:48 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:29:48.007 { 00:29:48.007 "nbd_device": "/dev/nbd0", 00:29:48.007 "bdev_name": "Ceph0" 00:29:48.007 } 00:29:48.007 ]' 00:29:48.007 05:20:48 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:48.007 05:20:48 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:29:48.007 05:20:48 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:29:48.007 05:20:48 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:48.007 05:20:48 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:29:48.007 05:20:48 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:29:48.007 05:20:48 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:29:48.007 05:20:48 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:29:48.007 05:20:48 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:29:48.007 05:20:48 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:29:48.007 05:20:48 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:48.007 05:20:48 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:29:48.007 05:20:48 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:48.007 05:20:48 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:29:48.007 05:20:48 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:29:48.007 256+0 records in 00:29:48.007 256+0 records out 00:29:48.007 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00899028 s, 117 MB/s 00:29:48.007 05:20:48 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:48.007 05:20:48 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:29:48.986 256+0 records in 00:29:48.986 256+0 records out 00:29:48.986 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.860043 s, 1.2 MB/s 00:29:48.986 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:29:48.986 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:29:48.986 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:48.986 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:29:48.986 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:48.986 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:29:48.986 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:29:48.986 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:48.986 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:29:48.986 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:48.986 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:48.986 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:48.986 05:20:49 
blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:48.986 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:48.986 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:29:48.986 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:48.986 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:49.260 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:49.260 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:49.260 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:49.260 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:49.260 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:49.260 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:49.260 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:49.260 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:49.260 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:49.260 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:49.260 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:49.519 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:49.519 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:49.519 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:49.519 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 
00:29:49.519 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:29:49.519 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:49.519 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:29:49.519 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:29:49.519 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:29:49.519 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:29:49.519 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:29:49.519 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:29:49.519 05:20:49 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:49.519 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:49.519 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:29:49.519 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:29:49.519 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:29:49.519 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:29:49.778 malloc_lvol_verify 00:29:49.778 05:20:49 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:29:50.036 44853b73-7b0a-4bee-bc23-d0e71eacafb1 00:29:50.036 05:20:50 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:29:50.313 09bd433d-c1e3-417f-abc2-2d2cc94f15a3 00:29:50.313 05:20:50 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@138 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:29:50.592 /dev/nbd0 00:29:50.592 05:20:50 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:29:50.592 Discarding device blocks: 0/4096mke2fs 1.46.5 (30-Dec-2021) 00:29:50.592 done 00:29:50.592 Creating filesystem with 4096 1k blocks and 1024 inodes 00:29:50.592 00:29:50.592 Allocating group tables: 0/1 done 00:29:50.592 Writing inode tables: 0/1 done 00:29:50.592 Creating journal (1024 blocks): done 00:29:50.592 Writing superblocks and filesystem accounting information: 0/1 done 00:29:50.592 00:29:50.592 05:20:50 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:29:50.592 05:20:50 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:50.592 05:20:50 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:50.592 05:20:50 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:50.592 05:20:50 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:50.592 05:20:50 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:29:50.592 05:20:50 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:50.592 05:20:50 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:50.851 05:20:50 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:50.851 05:20:50 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:50.851 05:20:50 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:50.851 05:20:50 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:50.851 05:20:50 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:50.851 05:20:50 blockdev_rbd.bdev_nbd -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:50.851 05:20:50 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:50.851 05:20:50 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:50.851 05:20:50 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:29:50.851 05:20:50 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:29:50.851 05:20:50 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 135624 00:29:50.851 05:20:50 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 135624 ']' 00:29:50.851 05:20:50 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 135624 00:29:50.851 05:20:50 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:29:50.851 05:20:50 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:50.851 05:20:50 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 135624 00:29:50.851 05:20:50 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:50.851 killing process with pid 135624 00:29:50.851 05:20:50 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:50.851 05:20:50 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 135624' 00:29:50.851 05:20:50 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@967 -- # kill 135624 00:29:50.851 05:20:50 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@972 -- # wait 135624 00:29:51.110 05:20:51 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:29:51.110 00:29:51.110 real 0m5.747s 00:29:51.110 user 0m8.209s 00:29:51.110 sys 0m1.494s 00:29:51.110 05:20:51 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:51.110 05:20:51 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:29:51.110 ************************************ 
00:29:51.110 END TEST bdev_nbd 00:29:51.110 ************************************ 00:29:51.110 05:20:51 blockdev_rbd -- common/autotest_common.sh@1142 -- # return 0 00:29:51.110 05:20:51 blockdev_rbd -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:29:51.110 05:20:51 blockdev_rbd -- bdev/blockdev.sh@763 -- # '[' rbd = nvme ']' 00:29:51.110 05:20:51 blockdev_rbd -- bdev/blockdev.sh@763 -- # '[' rbd = gpt ']' 00:29:51.110 05:20:51 blockdev_rbd -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:29:51.110 05:20:51 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:51.110 05:20:51 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:51.110 05:20:51 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:29:51.110 ************************************ 00:29:51.111 START TEST bdev_fio 00:29:51.111 ************************************ 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:29:51.111 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:29:51.111 
05:20:51 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Ceph0]' 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Ceph0 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev 
--iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:29:51.111 ************************************ 00:29:51.111 START TEST bdev_fio_rw_verify 00:29:51.111 ************************************ 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:51.111 05:20:51 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:51.380 05:20:51 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:51.380 05:20:51 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:51.380 05:20:51 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:51.380 05:20:51 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:51.380 05:20:51 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:51.380 05:20:51 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:51.380 05:20:51 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:51.380 05:20:51 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:51.380 05:20:51 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:51.380 05:20:51 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:29:51.380 job_Ceph0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:29:51.380 fio-3.35 00:29:51.380 Starting 1 thread 00:30:03.580 00:30:03.580 job_Ceph0: (groupid=0, jobs=1): err= 0: pid=135866: Tue Jul 23 05:21:02 2024 00:30:03.580 read: IOPS=614, BW=2457KiB/s (2516kB/s)(24.0MiB/10003msec) 00:30:03.580 slat (usec): min=3, max=305, avg=10.86, stdev=18.03 00:30:03.580 clat (usec): min=165, max=603578, avg=4070.16, stdev=37043.88 00:30:03.580 lat (usec): min=178, max=603583, avg=4081.03, stdev=37043.66 00:30:03.580 clat percentiles (usec): 00:30:03.580 | 50.000th=[ 453], 99.000th=[ 80217], 99.900th=[557843], 00:30:03.580 | 99.990th=[599786], 99.999th=[599786] 00:30:03.580 write: IOPS=655, BW=2622KiB/s (2685kB/s)(25.6MiB/10003msec); 0 zone resets 00:30:03.580 slat (usec): min=14, max=418, avg=19.59, stdev=11.03 00:30:03.580 clat (msec): min=2, max=147, avg= 8.33, stdev=16.57 00:30:03.580 lat (msec): min=2, max=147, avg= 8.35, stdev=16.57 00:30:03.580 clat percentiles (msec): 00:30:03.580 | 50.000th=[ 5], 99.000th=[ 101], 99.900th=[ 128], 99.990th=[ 148], 00:30:03.580 | 99.999th=[ 148] 00:30:03.580 bw ( KiB/s): min= 72, max= 6760, per=100.00%, avg=2734.67, stdev=2126.75, samples=18 00:30:03.580 iops : min= 18, max= 1690, avg=683.67, stdev=531.69, samples=18 00:30:03.580 lat (usec) : 250=1.09%, 500=28.97%, 750=14.52%, 1000=2.02% 
00:30:03.580 lat (msec) : 2=0.58%, 4=10.15%, 10=39.23%, 20=0.52%, 50=0.36% 00:30:03.580 lat (msec) : 100=1.65%, 250=0.65%, 500=0.17%, 750=0.10% 00:30:03.580 cpu : usr=98.98%, sys=0.21%, ctx=1014, majf=0, minf=5 00:30:03.580 IO depths : 1=0.1%, 2=0.1%, 4=30.1%, 8=69.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:03.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:03.580 complete : 0=0.0%, 4=99.7%, 8=0.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:03.580 issued rwts: total=6144,6557,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:03.580 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:03.580 00:30:03.580 Run status group 0 (all jobs): 00:30:03.580 READ: bw=2457KiB/s (2516kB/s), 2457KiB/s-2457KiB/s (2516kB/s-2516kB/s), io=24.0MiB (25.2MB), run=10003-10003msec 00:30:03.580 WRITE: bw=2622KiB/s (2685kB/s), 2622KiB/s-2622KiB/s (2685kB/s-2685kB/s), io=25.6MiB (26.9MB), run=10003-10003msec 00:30:03.580 00:30:03.580 real 0m10.915s 00:30:03.580 user 0m11.117s 00:30:03.580 sys 0m0.716s 00:30:03.580 05:21:02 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:03.580 05:21:02 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:30:03.580 ************************************ 00:30:03.580 END TEST bdev_fio_rw_verify 00:30:03.580 ************************************ 00:30:03.580 05:21:02 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:30:03.580 05:21:02 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:30:03.580 05:21:02 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:30:03.580 05:21:02 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:30:03.580 05:21:02 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:30:03.580 
05:21:02 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:30:03.580 05:21:02 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:30:03.580 05:21:02 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:30:03.580 05:21:02 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:30:03.580 05:21:02 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:30:03.580 05:21:02 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:30:03.580 05:21:02 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:30:03.580 05:21:02 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:30:03.580 05:21:02 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:30:03.580 05:21:02 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:30:03.580 05:21:02 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:30:03.580 05:21:02 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:30:03.580 05:21:02 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Ceph0",' ' "aliases": [' ' "e1cee80a-82c1-479c-bba1-c45225badaf2"' ' ],' ' "product_name": "Ceph Rbd Disk",' ' "block_size": 512,' ' "num_blocks": 2048000,' ' "uuid": "e1cee80a-82c1-479c-bba1-c45225badaf2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' 
"get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": true,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "rbd": {' ' "pool_name": "rbd",' ' "rbd_name": "foo"' ' }' ' }' '}' 00:30:03.580 05:21:02 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:30:03.580 05:21:02 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n Ceph0 ]] 00:30:03.580 05:21:02 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:30:03.580 05:21:02 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Ceph0",' ' "aliases": [' ' "e1cee80a-82c1-479c-bba1-c45225badaf2"' ' ],' ' "product_name": "Ceph Rbd Disk",' ' "block_size": 512,' ' "num_blocks": 2048000,' ' "uuid": "e1cee80a-82c1-479c-bba1-c45225badaf2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": true,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "rbd": {' ' "pool_name": "rbd",' ' "rbd_name": "foo"' ' }' ' }' '}' 00:30:03.580 05:21:02 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:30:03.580 05:21:02 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@356 -- # echo 
'[job_Ceph0]' 00:30:03.580 05:21:02 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Ceph0 00:30:03.580 05:21:02 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:30:03.580 05:21:02 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:30:03.580 05:21:02 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:03.580 05:21:02 blockdev_rbd.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:30:03.580 ************************************ 00:30:03.580 START TEST bdev_fio_trim 00:30:03.580 ************************************ 00:30:03.580 05:21:02 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:30:03.580 05:21:02 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:30:03.580 05:21:02 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:03.580 05:21:02 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:03.580 05:21:02 
blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:03.580 05:21:02 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:03.580 05:21:02 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:30:03.580 05:21:02 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:03.580 05:21:02 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:03.580 05:21:02 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:03.580 05:21:02 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:30:03.580 05:21:02 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:03.581 05:21:02 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:03.581 05:21:02 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:03.581 05:21:02 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:03.581 05:21:02 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:03.581 05:21:02 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:03.581 05:21:02 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:03.581 05:21:02 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:03.581 05:21:02 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:03.581 05:21:02 blockdev_rbd.bdev_fio.bdev_fio_trim -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:03.581 05:21:02 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:30:03.581 job_Ceph0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:30:03.581 fio-3.35 00:30:03.581 Starting 1 thread 00:30:13.581 00:30:13.581 job_Ceph0: (groupid=0, jobs=1): err= 0: pid=136043: Tue Jul 23 05:21:13 2024 00:30:13.581 write: IOPS=907, BW=3630KiB/s (3717kB/s)(35.5MiB/10011msec); 0 zone resets 00:30:13.581 slat (usec): min=4, max=446, avg=11.39, stdev=24.43 00:30:13.581 clat (usec): min=2279, max=38104, avg=8701.60, stdev=3174.56 00:30:13.581 lat (usec): min=2284, max=38109, avg=8712.99, stdev=3175.09 00:30:13.581 clat percentiles (usec): 00:30:13.581 | 50.000th=[ 8160], 99.000th=[14353], 99.900th=[27395], 99.990th=[38011], 00:30:13.581 | 99.999th=[38011] 00:30:13.581 bw ( KiB/s): min= 2808, max= 4768, per=100.00%, avg=3632.20, stdev=726.97, samples=20 00:30:13.581 iops : min= 702, max= 1192, avg=907.95, stdev=181.73, samples=20 00:30:13.581 trim: IOPS=907, BW=3630KiB/s (3717kB/s)(35.5MiB/10011msec); 0 zone resets 00:30:13.581 slat (usec): min=3, max=561, avg= 8.80, stdev=16.26 00:30:13.581 clat (usec): min=3, max=8444, avg=88.27, stdev=186.94 00:30:13.581 lat (usec): min=11, max=8454, avg=97.06, stdev=186.91 00:30:13.581 clat percentiles (usec): 00:30:13.581 | 50.000th=[ 78], 99.000th=[ 231], 99.900th=[ 351], 99.990th=[ 8455], 00:30:13.581 | 99.999th=[ 8455] 00:30:13.581 bw ( KiB/s): min= 2808, max= 4824, per=100.00%, avg=3635.40, stdev=732.33, samples=20 00:30:13.581 iops : min= 702, max= 1206, avg=908.75, stdev=183.07, 
samples=20 00:30:13.581 lat (usec) : 4=0.03%, 10=0.98%, 20=10.03%, 50=10.30%, 100=8.43% 00:30:13.581 lat (usec) : 250=19.91%, 500=0.29% 00:30:13.581 lat (msec) : 4=3.30%, 10=27.82%, 20=18.78%, 50=0.13% 00:30:13.581 cpu : usr=98.77%, sys=0.15%, ctx=2053, majf=0, minf=4 00:30:13.581 IO depths : 1=0.1%, 2=0.1%, 4=14.4%, 8=85.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:13.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.581 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.581 issued rwts: total=0,9085,9085,0 short=0,0,0,0 dropped=0,0,0,0 00:30:13.581 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:13.581 00:30:13.581 Run status group 0 (all jobs): 00:30:13.581 WRITE: bw=3630KiB/s (3717kB/s), 3630KiB/s-3630KiB/s (3717kB/s-3717kB/s), io=35.5MiB (37.2MB), run=10011-10011msec 00:30:13.581 TRIM: bw=3630KiB/s (3717kB/s), 3630KiB/s-3630KiB/s (3717kB/s-3717kB/s), io=35.5MiB (37.2MB), run=10011-10011msec 00:30:13.581 00:30:13.581 real 0m10.924s 00:30:13.581 user 0m11.031s 00:30:13.581 sys 0m0.662s 00:30:13.581 05:21:13 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:13.581 05:21:13 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:30:13.581 ************************************ 00:30:13.581 END TEST bdev_fio_trim 00:30:13.581 ************************************ 00:30:13.581 05:21:13 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:30:13.581 05:21:13 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f 00:30:13.581 05:21:13 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:30:13.581 /home/vagrant/spdk_repo/spdk 00:30:13.581 05:21:13 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@369 -- # popd 00:30:13.581 05:21:13 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:30:13.581 00:30:13.581 real 0m22.123s 
00:30:13.581 user 0m22.310s 00:30:13.581 sys 0m1.485s 00:30:13.581 ************************************ 00:30:13.581 END TEST bdev_fio 00:30:13.581 ************************************ 00:30:13.581 05:21:13 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:13.581 05:21:13 blockdev_rbd.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:30:13.581 05:21:13 blockdev_rbd -- common/autotest_common.sh@1142 -- # return 0 00:30:13.581 05:21:13 blockdev_rbd -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:13.581 05:21:13 blockdev_rbd -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:30:13.581 05:21:13 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:30:13.581 05:21:13 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:13.581 05:21:13 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:30:13.581 ************************************ 00:30:13.581 START TEST bdev_verify 00:30:13.581 ************************************ 00:30:13.581 05:21:13 blockdev_rbd.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:30:13.581 [2024-07-23 05:21:13.489528] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:30:13.581 [2024-07-23 05:21:13.489628] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136183 ] 00:30:13.581 [2024-07-23 05:21:13.626761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:13.581 [2024-07-23 05:21:13.721647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:13.581 [2024-07-23 05:21:13.721657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:18.857 [2024-07-23 05:21:18.745548] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:30:18.857 Running I/O for 5 seconds... 00:30:24.123 00:30:24.123 Latency(us) 00:30:24.123 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:24.124 Job: Ceph0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:24.124 Verification LBA range: start 0x0 length 0x1f400 00:30:24.124 Ceph0 : 5.03 2364.50 9.24 0.00 0.00 53897.35 3619.37 671088.64 00:30:24.124 Job: Ceph0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:24.124 Verification LBA range: start 0x1f400 length 0x1f400 00:30:24.124 Ceph0 : 5.03 2324.87 9.08 0.00 0.00 54935.67 2755.49 690153.66 00:30:24.124 =================================================================================================================== 00:30:24.124 Total : 4689.37 18.32 0.00 0.00 54412.53 2755.49 690153.66 00:30:24.124 00:30:24.124 real 0m10.599s 00:30:24.124 user 0m16.743s 00:30:24.124 sys 0m0.755s 00:30:24.124 05:21:24 blockdev_rbd.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:24.124 05:21:24 blockdev_rbd.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:30:24.124 ************************************ 00:30:24.124 END TEST bdev_verify 00:30:24.124 ************************************ 00:30:24.124 05:21:24 
blockdev_rbd -- common/autotest_common.sh@1142 -- # return 0 00:30:24.124 05:21:24 blockdev_rbd -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:30:24.124 05:21:24 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:30:24.124 05:21:24 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:24.124 05:21:24 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:30:24.124 ************************************ 00:30:24.124 START TEST bdev_verify_big_io 00:30:24.124 ************************************ 00:30:24.124 05:21:24 blockdev_rbd.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:30:24.124 [2024-07-23 05:21:24.126971] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:30:24.124 [2024-07-23 05:21:24.127095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136322 ] 00:30:24.124 [2024-07-23 05:21:24.268725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:24.382 [2024-07-23 05:21:24.359056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:24.382 [2024-07-23 05:21:24.359069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:24.382 [2024-07-23 05:21:24.537982] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:30:24.382 Running I/O for 5 seconds... 
00:30:29.686 00:30:29.686 Latency(us) 00:30:29.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:29.686 Job: Ceph0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:30:29.686 Verification LBA range: start 0x0 length 0x1f40 00:30:29.686 Ceph0 : 5.09 633.33 39.58 0.00 0.00 198115.47 1407.53 392739.37 00:30:29.686 Job: Ceph0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:30:29.686 Verification LBA range: start 0x1f40 length 0x1f40 00:30:29.687 Ceph0 : 5.10 633.72 39.61 0.00 0.00 197410.94 2740.60 415617.40 00:30:29.687 =================================================================================================================== 00:30:29.687 Total : 1267.05 79.19 0.00 0.00 197762.77 1407.53 415617.40 00:30:29.687 00:30:29.687 real 0m5.816s 00:30:29.687 user 0m11.533s 00:30:29.687 sys 0m0.661s 00:30:29.687 05:21:29 blockdev_rbd.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:29.687 05:21:29 blockdev_rbd.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:30:29.687 ************************************ 00:30:29.687 END TEST bdev_verify_big_io 00:30:29.687 ************************************ 00:30:29.945 05:21:29 blockdev_rbd -- common/autotest_common.sh@1142 -- # return 0 00:30:29.945 05:21:29 blockdev_rbd -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:29.945 05:21:29 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:30:29.945 05:21:29 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:29.945 05:21:29 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:30:29.945 ************************************ 00:30:29.945 START TEST bdev_write_zeroes 00:30:29.945 ************************************ 00:30:29.945 05:21:29 blockdev_rbd.bdev_write_zeroes -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:29.945 [2024-07-23 05:21:29.997440] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:30:29.945 [2024-07-23 05:21:29.997540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136417 ] 00:30:29.945 [2024-07-23 05:21:30.135904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:30.204 [2024-07-23 05:21:30.223853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:30.204 [2024-07-23 05:21:30.397214] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:30:30.204 Running I/O for 1 seconds... 00:30:31.573 00:30:31.573 Latency(us) 00:30:31.573 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:31.573 Job: Ceph0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:30:31.573 Ceph0 : 1.35 3883.34 15.17 0.00 0.00 32899.34 4676.89 560511.53 00:30:31.573 =================================================================================================================== 00:30:31.573 Total : 3883.34 15.17 0.00 0.00 32899.34 4676.89 560511.53 00:30:31.830 00:30:31.830 real 0m2.060s 00:30:31.830 user 0m1.958s 00:30:31.830 sys 0m0.219s 00:30:31.830 05:21:31 blockdev_rbd.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:31.830 05:21:31 blockdev_rbd.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:30:31.830 ************************************ 00:30:31.830 END TEST bdev_write_zeroes 00:30:31.830 ************************************ 00:30:31.830 05:21:32 blockdev_rbd -- common/autotest_common.sh@1142 -- # return 0 
00:30:31.830 05:21:32 blockdev_rbd -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:31.830 05:21:32 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:30:31.830 05:21:32 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:31.830 05:21:32 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:30:31.830 ************************************ 00:30:31.830 START TEST bdev_json_nonenclosed 00:30:31.830 ************************************ 00:30:31.830 05:21:32 blockdev_rbd.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:32.087 [2024-07-23 05:21:32.108926] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:30:32.087 [2024-07-23 05:21:32.109057] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136473 ] 00:30:32.087 [2024-07-23 05:21:32.249494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:32.345 [2024-07-23 05:21:32.345675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:32.345 [2024-07-23 05:21:32.345766] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:30:32.345 [2024-07-23 05:21:32.345787] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:30:32.345 [2024-07-23 05:21:32.345798] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:32.345 00:30:32.345 real 0m0.404s 00:30:32.345 user 0m0.229s 00:30:32.345 sys 0m0.072s 00:30:32.345 05:21:32 blockdev_rbd.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:30:32.345 05:21:32 blockdev_rbd.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:32.345 05:21:32 blockdev_rbd.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:30:32.345 ************************************ 00:30:32.345 END TEST bdev_json_nonenclosed 00:30:32.345 ************************************ 00:30:32.345 05:21:32 blockdev_rbd -- common/autotest_common.sh@1142 -- # return 234 00:30:32.345 05:21:32 blockdev_rbd -- bdev/blockdev.sh@781 -- # true 00:30:32.345 05:21:32 blockdev_rbd -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:32.345 05:21:32 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:30:32.345 05:21:32 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:32.345 05:21:32 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:30:32.345 ************************************ 00:30:32.345 START TEST bdev_json_nonarray 00:30:32.345 ************************************ 00:30:32.345 05:21:32 blockdev_rbd.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:32.603 [2024-07-23 05:21:32.569633] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 
00:30:32.603 [2024-07-23 05:21:32.569730] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136496 ] 00:30:32.604 [2024-07-23 05:21:32.706599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:32.604 [2024-07-23 05:21:32.814952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:32.604 [2024-07-23 05:21:32.815050] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:30:32.604 [2024-07-23 05:21:32.815071] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:30:32.604 [2024-07-23 05:21:32.815082] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:32.862 00:30:32.862 real 0m0.421s 00:30:32.862 user 0m0.244s 00:30:32.862 sys 0m0.073s 00:30:32.862 05:21:32 blockdev_rbd.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:30:32.862 05:21:32 blockdev_rbd.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:32.862 ************************************ 00:30:32.862 END TEST bdev_json_nonarray 00:30:32.862 ************************************ 00:30:32.862 05:21:32 blockdev_rbd.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:30:32.862 05:21:32 blockdev_rbd -- common/autotest_common.sh@1142 -- # return 234 00:30:32.862 05:21:32 blockdev_rbd -- bdev/blockdev.sh@784 -- # true 00:30:32.862 05:21:32 blockdev_rbd -- bdev/blockdev.sh@786 -- # [[ rbd == bdev ]] 00:30:32.862 05:21:32 blockdev_rbd -- bdev/blockdev.sh@793 -- # [[ rbd == gpt ]] 00:30:32.862 05:21:32 blockdev_rbd -- bdev/blockdev.sh@797 -- # [[ rbd == crypto_sw ]] 00:30:32.862 05:21:32 blockdev_rbd -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:30:32.862 05:21:32 blockdev_rbd -- 
bdev/blockdev.sh@810 -- # cleanup 00:30:32.862 05:21:32 blockdev_rbd -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:30:32.862 05:21:32 blockdev_rbd -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:32.862 05:21:32 blockdev_rbd -- bdev/blockdev.sh@26 -- # [[ rbd == rbd ]] 00:30:32.862 05:21:32 blockdev_rbd -- bdev/blockdev.sh@27 -- # rbd_cleanup 00:30:32.862 05:21:32 blockdev_rbd -- common/autotest_common.sh@1031 -- # hash ceph 00:30:32.862 05:21:32 blockdev_rbd -- common/autotest_common.sh@1032 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:30:32.862 + base_dir=/var/tmp/ceph 00:30:32.862 + image=/var/tmp/ceph/ceph_raw.img 00:30:32.862 + dev=/dev/loop200 00:30:32.862 + pkill -9 ceph 00:30:32.862 + sleep 3 00:30:36.175 + umount /dev/loop200p2 00:30:36.175 + losetup -d /dev/loop200 00:30:36.175 + rm -rf /var/tmp/ceph 00:30:36.175 05:21:36 blockdev_rbd -- common/autotest_common.sh@1033 -- # rm -f /var/tmp/ceph_raw.img 00:30:36.175 05:21:36 blockdev_rbd -- bdev/blockdev.sh@30 -- # [[ rbd == daos ]] 00:30:36.175 05:21:36 blockdev_rbd -- bdev/blockdev.sh@34 -- # [[ rbd = \g\p\t ]] 00:30:36.175 05:21:36 blockdev_rbd -- bdev/blockdev.sh@40 -- # [[ rbd == xnvme ]] 00:30:36.175 00:30:36.175 real 1m13.933s 00:30:36.175 user 1m28.370s 00:30:36.175 sys 0m6.965s 00:30:36.175 05:21:36 blockdev_rbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:36.175 05:21:36 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:30:36.175 ************************************ 00:30:36.175 END TEST blockdev_rbd 00:30:36.175 ************************************ 00:30:36.435 05:21:36 -- common/autotest_common.sh@1142 -- # return 0 00:30:36.435 05:21:36 -- spdk/autotest.sh@332 -- # run_test spdkcli_rbd /home/vagrant/spdk_repo/spdk/test/spdkcli/rbd.sh 00:30:36.435 05:21:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:36.435 05:21:36 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:30:36.435 05:21:36 -- common/autotest_common.sh@10 -- # set +x 00:30:36.435 ************************************ 00:30:36.435 START TEST spdkcli_rbd 00:30:36.435 ************************************ 00:30:36.435 05:21:36 spdkcli_rbd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/rbd.sh 00:30:36.435 * Looking for test storage... 00:30:36.435 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:30:36.435 05:21:36 spdkcli_rbd -- spdkcli/rbd.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:30:36.435 05:21:36 spdkcli_rbd -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:30:36.435 05:21:36 spdkcli_rbd -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:30:36.435 05:21:36 spdkcli_rbd -- spdkcli/rbd.sh@11 -- # MATCH_FILE=spdkcli_rbd.test 00:30:36.435 05:21:36 spdkcli_rbd -- spdkcli/rbd.sh@12 -- # SPDKCLI_BRANCH=/bdevs/rbd 00:30:36.435 05:21:36 spdkcli_rbd -- spdkcli/rbd.sh@14 -- # trap 'rbd_cleanup; cleanup' EXIT 00:30:36.435 05:21:36 spdkcli_rbd -- spdkcli/rbd.sh@15 -- # timing_enter run_spdk_tgt 00:30:36.435 05:21:36 spdkcli_rbd -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:36.435 05:21:36 spdkcli_rbd -- common/autotest_common.sh@10 -- # set +x 00:30:36.435 05:21:36 spdkcli_rbd -- spdkcli/rbd.sh@16 -- # run_spdk_tgt 00:30:36.435 05:21:36 spdkcli_rbd -- spdkcli/common.sh@27 -- # spdk_tgt_pid=136612 00:30:36.435 05:21:36 spdkcli_rbd -- spdkcli/common.sh@28 -- # waitforlisten 136612 00:30:36.435 05:21:36 spdkcli_rbd -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:30:36.435 05:21:36 spdkcli_rbd -- common/autotest_common.sh@829 -- # '[' -z 136612 ']' 00:30:36.435 05:21:36 spdkcli_rbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:36.435 05:21:36 spdkcli_rbd -- common/autotest_common.sh@834 
-- # local max_retries=100 00:30:36.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:36.435 05:21:36 spdkcli_rbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:36.435 05:21:36 spdkcli_rbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:36.435 05:21:36 spdkcli_rbd -- common/autotest_common.sh@10 -- # set +x 00:30:36.435 [2024-07-23 05:21:36.597357] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 22.11.4 initialization... 00:30:36.435 [2024-07-23 05:21:36.597456] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136612 ] 00:30:36.694 [2024-07-23 05:21:36.734167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:36.694 [2024-07-23 05:21:36.823145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:36.694 [2024-07-23 05:21:36.823159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:37.630 05:21:37 spdkcli_rbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:37.630 05:21:37 spdkcli_rbd -- common/autotest_common.sh@862 -- # return 0 00:30:37.630 05:21:37 spdkcli_rbd -- spdkcli/rbd.sh@17 -- # timing_exit run_spdk_tgt 00:30:37.630 05:21:37 spdkcli_rbd -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:37.630 05:21:37 spdkcli_rbd -- common/autotest_common.sh@10 -- # set +x 00:30:37.630 05:21:37 spdkcli_rbd -- spdkcli/rbd.sh@19 -- # timing_enter spdkcli_create_rbd_config 00:30:37.630 05:21:37 spdkcli_rbd -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:37.630 05:21:37 spdkcli_rbd -- common/autotest_common.sh@10 -- # set +x 00:30:37.630 05:21:37 spdkcli_rbd -- spdkcli/rbd.sh@20 -- # rbd_cleanup 00:30:37.630 05:21:37 spdkcli_rbd -- 
common/autotest_common.sh@1031 -- # hash ceph 00:30:37.630 05:21:37 spdkcli_rbd -- common/autotest_common.sh@1032 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:30:37.630 + base_dir=/var/tmp/ceph 00:30:37.630 + image=/var/tmp/ceph/ceph_raw.img 00:30:37.630 + dev=/dev/loop200 00:30:37.630 + pkill -9 ceph 00:30:37.630 + sleep 3 00:30:40.917 + umount /dev/loop200p2 00:30:40.917 umount: /dev/loop200p2: no mount point specified. 00:30:40.917 + losetup -d /dev/loop200 00:30:40.917 losetup: /dev/loop200: detach failed: No such device or address 00:30:40.917 + rm -rf /var/tmp/ceph 00:30:40.917 05:21:40 spdkcli_rbd -- common/autotest_common.sh@1033 -- # rm -f /var/tmp/ceph_raw.img 00:30:40.917 05:21:40 spdkcli_rbd -- spdkcli/rbd.sh@21 -- # rbd_setup 127.0.0.1 00:30:40.917 05:21:40 spdkcli_rbd -- common/autotest_common.sh@1005 -- # '[' -z 127.0.0.1 ']' 00:30:40.917 05:21:40 spdkcli_rbd -- common/autotest_common.sh@1009 -- # '[' -n '' ']' 00:30:40.917 05:21:40 spdkcli_rbd -- common/autotest_common.sh@1018 -- # hash ceph 00:30:40.917 05:21:40 spdkcli_rbd -- common/autotest_common.sh@1019 -- # export PG_NUM=128 00:30:40.917 05:21:40 spdkcli_rbd -- common/autotest_common.sh@1019 -- # PG_NUM=128 00:30:40.917 05:21:40 spdkcli_rbd -- common/autotest_common.sh@1020 -- # export RBD_POOL=rbd 00:30:40.917 05:21:40 spdkcli_rbd -- common/autotest_common.sh@1020 -- # RBD_POOL=rbd 00:30:40.917 05:21:40 spdkcli_rbd -- common/autotest_common.sh@1021 -- # export RBD_NAME=foo 00:30:40.917 05:21:40 spdkcli_rbd -- common/autotest_common.sh@1021 -- # RBD_NAME=foo 00:30:40.917 05:21:40 spdkcli_rbd -- common/autotest_common.sh@1022 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:30:40.917 + base_dir=/var/tmp/ceph 00:30:40.917 + image=/var/tmp/ceph/ceph_raw.img 00:30:40.917 + dev=/dev/loop200 00:30:40.917 + pkill -9 ceph 00:30:40.917 + sleep 3 00:30:43.449 + umount /dev/loop200p2 00:30:43.707 umount: /dev/loop200p2: no mount point specified. 
00:30:43.707 + losetup -d /dev/loop200 00:30:43.707 losetup: /dev/loop200: detach failed: No such device or address 00:30:43.707 + rm -rf /var/tmp/ceph 00:30:43.707 05:21:43 spdkcli_rbd -- common/autotest_common.sh@1023 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/start.sh 127.0.0.1 00:30:43.707 + set -e 00:30:43.707 +++ dirname /home/vagrant/spdk_repo/spdk/scripts/ceph/start.sh 00:30:43.707 ++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/ceph 00:30:43.707 + script_dir=/home/vagrant/spdk_repo/spdk/scripts/ceph 00:30:43.707 + base_dir=/var/tmp/ceph 00:30:43.707 + mon_ip=127.0.0.1 00:30:43.707 + mon_dir=/var/tmp/ceph/mon.a 00:30:43.707 + pid_dir=/var/tmp/ceph/pid 00:30:43.707 + ceph_conf=/var/tmp/ceph/ceph.conf 00:30:43.707 + mnt_dir=/var/tmp/ceph/mnt 00:30:43.707 + image=/var/tmp/ceph_raw.img 00:30:43.707 + dev=/dev/loop200 00:30:43.707 + modprobe loop 00:30:43.707 + umount /dev/loop200p2 00:30:43.707 umount: /dev/loop200p2: no mount point specified. 00:30:43.707 + true 00:30:43.707 + losetup -d /dev/loop200 00:30:43.707 losetup: /dev/loop200: detach failed: No such device or address 00:30:43.707 + true 00:30:43.707 + '[' -d /var/tmp/ceph ']' 00:30:43.707 + mkdir /var/tmp/ceph 00:30:43.707 + cp /home/vagrant/spdk_repo/spdk/scripts/ceph/ceph.conf /var/tmp/ceph/ceph.conf 00:30:43.707 + '[' '!' 
-e /var/tmp/ceph_raw.img ']' 00:30:43.707 + fallocate -l 4G /var/tmp/ceph_raw.img 00:30:43.707 + mknod /dev/loop200 b 7 200 00:30:43.707 mknod: /dev/loop200: File exists 00:30:43.707 + true 00:30:43.707 + losetup /dev/loop200 /var/tmp/ceph_raw.img 00:30:43.707 + PARTED='parted -s' 00:30:43.707 + SGDISK=sgdisk 00:30:43.707 + echo 'Partitioning /dev/loop200' 00:30:43.707 Partitioning /dev/loop200 00:30:43.707 + parted -s /dev/loop200 mktable gpt 00:30:43.707 + sleep 2 00:30:46.241 + parted -s /dev/loop200 mkpart primary 0% 2GiB 00:30:46.241 + parted -s /dev/loop200 mkpart primary 2GiB 100% 00:30:46.241 Setting name on /dev/loop200 00:30:46.241 + partno=0 00:30:46.241 + echo 'Setting name on /dev/loop200' 00:30:46.241 + sgdisk -c 1:osd-device-0-journal /dev/loop200 00:30:46.807 Warning: The kernel is still using the old partition table. 00:30:46.807 The new table will be used at the next reboot or after you 00:30:46.807 run partprobe(8) or kpartx(8) 00:30:46.807 The operation has completed successfully. 00:30:46.807 + sgdisk -c 2:osd-device-0-data /dev/loop200 00:30:48.184 Warning: The kernel is still using the old partition table. 00:30:48.184 The new table will be used at the next reboot or after you 00:30:48.184 run partprobe(8) or kpartx(8) 00:30:48.184 The operation has completed successfully. 
00:30:48.184 + kpartx /dev/loop200 00:30:48.184 loop200p1 : 0 4192256 /dev/loop200 2048 00:30:48.184 loop200p2 : 0 4192256 /dev/loop200 4194304 00:30:48.184 ++ ceph -v 00:30:48.184 ++ awk '{print $3}' 00:30:48.184 + ceph_version=17.2.7 00:30:48.184 + ceph_maj=17 00:30:48.184 + '[' 17 -gt 12 ']' 00:30:48.184 + update_config=true 00:30:48.184 + rm -f /var/log/ceph/ceph-mon.a.log 00:30:48.184 + set_min_mon_release='--set-min-mon-release 14' 00:30:48.184 + ceph_osd_extra_config='--check-needs-journal --no-mon-config' 00:30:48.184 + mnt_pt=/var/tmp/ceph/mnt/osd-device-0-data 00:30:48.184 + mkdir -p /var/tmp/ceph/mnt/osd-device-0-data 00:30:48.184 + mkfs.xfs -f /dev/disk/by-partlabel/osd-device-0-data 00:30:48.184 meta-data=/dev/disk/by-partlabel/osd-device-0-data isize=512 agcount=4, agsize=131008 blks 00:30:48.184 = sectsz=512 attr=2, projid32bit=1 00:30:48.184 = crc=1 finobt=1, sparse=1, rmapbt=0 00:30:48.184 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:30:48.184 data = bsize=4096 blocks=524032, imaxpct=25 00:30:48.184 = sunit=0 swidth=0 blks 00:30:48.184 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:30:48.184 log =internal log bsize=4096 blocks=16384, version=2 00:30:48.184 = sectsz=512 sunit=0 blks, lazy-count=1 00:30:48.184 realtime =none extsz=4096 blocks=0, rtextents=0 00:30:48.184 Discarding blocks...Done. 00:30:48.185 + mount /dev/disk/by-partlabel/osd-device-0-data /var/tmp/ceph/mnt/osd-device-0-data 00:30:48.185 + cat 00:30:48.185 + rm -rf '/var/tmp/ceph/mon.a/*' 00:30:48.185 + mkdir -p /var/tmp/ceph/mon.a 00:30:48.185 + mkdir -p /var/tmp/ceph/pid 00:30:48.185 + rm -f /etc/ceph/ceph.client.admin.keyring 00:30:48.185 + ceph-authtool --create-keyring --gen-key --name=mon. 
/var/tmp/ceph/keyring --cap mon 'allow *' 00:30:48.185 creating /var/tmp/ceph/keyring 00:30:48.185 + ceph-authtool --gen-key --name=client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *' /var/tmp/ceph/keyring 00:30:48.185 + monmaptool --create --clobber --add a 127.0.0.1:12046 --print /var/tmp/ceph/monmap --set-min-mon-release 14 00:30:48.185 monmaptool: monmap file /var/tmp/ceph/monmap 00:30:48.185 monmaptool: generated fsid a714f7b3-b4b6-4320-ba44-d0443c0d031e 00:30:48.185 setting min_mon_release = octopus 00:30:48.185 epoch 0 00:30:48.185 fsid a714f7b3-b4b6-4320-ba44-d0443c0d031e 00:30:48.185 last_changed 2024-07-23T05:21:48.262350+0000 00:30:48.185 created 2024-07-23T05:21:48.262350+0000 00:30:48.185 min_mon_release 15 (octopus) 00:30:48.185 election_strategy: 1 00:30:48.185 0: v2:127.0.0.1:12046/0 mon.a 00:30:48.185 monmaptool: writing epoch 0 to /var/tmp/ceph/monmap (1 monitors) 00:30:48.185 + sh -c 'ulimit -c unlimited && exec ceph-mon --mkfs -c /var/tmp/ceph/ceph.conf -i a --monmap=/var/tmp/ceph/monmap --keyring=/var/tmp/ceph/keyring --mon-data=/var/tmp/ceph/mon.a' 00:30:48.185 + '[' true = true ']' 00:30:48.185 + sed -i 's/mon addr = /mon addr = v2:/g' /var/tmp/ceph/ceph.conf 00:30:48.185 + cp /var/tmp/ceph/keyring /var/tmp/ceph/mon.a/keyring 00:30:48.185 + cp /var/tmp/ceph/ceph.conf /etc/ceph/ceph.conf 00:30:48.185 + cp /var/tmp/ceph/keyring /etc/ceph/keyring 00:30:48.185 + cp /var/tmp/ceph/keyring /etc/ceph/ceph.client.admin.keyring 00:30:48.185 + chmod a+r /etc/ceph/ceph.client.admin.keyring 00:30:48.185 ++ hostname 00:30:48.185 + ceph-run sh -c 'ulimit -n 16384 && ulimit -c unlimited && exec ceph-mon -c /var/tmp/ceph/ceph.conf -i a --keyring=/var/tmp/ceph/keyring --pid-file=/var/tmp/ceph/pid/root@fedora38-cloud-1716830599-074-updated-1705279005.pid --mon-data=/var/tmp/ceph/mon.a' 00:30:48.443 + true 00:30:48.443 + '[' true = true ']' 00:30:48.443 + ceph-conf --name mon.a --show-config-value log_file 00:30:48.443 
/var/log/ceph/ceph-mon.a.log 00:30:48.443 ++ ceph -s 00:30:48.443 ++ grep id 00:30:48.443 ++ awk '{print $2}' 00:30:48.701 + fsid=a714f7b3-b4b6-4320-ba44-d0443c0d031e 00:30:48.701 + sed -i 's/perf = true/perf = true\n\tfsid = a714f7b3-b4b6-4320-ba44-d0443c0d031e \n/g' /var/tmp/ceph/ceph.conf 00:30:48.701 + (( ceph_maj < 18 )) 00:30:48.701 + sed -i 's/perf = true/perf = true\n\tosd objectstore = filestore\n/g' /var/tmp/ceph/ceph.conf 00:30:48.701 + cat /var/tmp/ceph/ceph.conf 00:30:48.701 [global] 00:30:48.701 debug_lockdep = 0/0 00:30:48.701 debug_context = 0/0 00:30:48.701 debug_crush = 0/0 00:30:48.701 debug_buffer = 0/0 00:30:48.701 debug_timer = 0/0 00:30:48.701 debug_filer = 0/0 00:30:48.701 debug_objecter = 0/0 00:30:48.701 debug_rados = 0/0 00:30:48.701 debug_rbd = 0/0 00:30:48.701 debug_ms = 0/0 00:30:48.701 debug_monc = 0/0 00:30:48.701 debug_tp = 0/0 00:30:48.701 debug_auth = 0/0 00:30:48.701 debug_finisher = 0/0 00:30:48.701 debug_heartbeatmap = 0/0 00:30:48.701 debug_perfcounter = 0/0 00:30:48.701 debug_asok = 0/0 00:30:48.701 debug_throttle = 0/0 00:30:48.701 debug_mon = 0/0 00:30:48.701 debug_paxos = 0/0 00:30:48.701 debug_rgw = 0/0 00:30:48.701 00:30:48.701 perf = true 00:30:48.701 osd objectstore = filestore 00:30:48.701 00:30:48.701 fsid = a714f7b3-b4b6-4320-ba44-d0443c0d031e 00:30:48.701 00:30:48.701 mutex_perf_counter = false 00:30:48.701 throttler_perf_counter = false 00:30:48.701 rbd cache = false 00:30:48.701 mon_allow_pool_delete = true 00:30:48.701 00:30:48.701 osd_pool_default_size = 1 00:30:48.701 00:30:48.701 [mon] 00:30:48.701 mon_max_pool_pg_num=166496 00:30:48.701 mon_osd_max_split_count = 10000 00:30:48.701 mon_pg_warn_max_per_osd = 10000 00:30:48.701 00:30:48.701 [osd] 00:30:48.701 osd_op_threads = 64 00:30:48.701 filestore_queue_max_ops=5000 00:30:48.701 filestore_queue_committing_max_ops=5000 00:30:48.701 journal_max_write_entries=1000 00:30:48.701 journal_queue_max_ops=3000 00:30:48.701 objecter_inflight_ops=102400 00:30:48.701 
filestore_wbthrottle_enable=false 00:30:48.701 filestore_queue_max_bytes=1048576000 00:30:48.701 filestore_queue_committing_max_bytes=1048576000 00:30:48.701 journal_max_write_bytes=1048576000 00:30:48.701 journal_queue_max_bytes=1048576000 00:30:48.702 ms_dispatch_throttle_bytes=1048576000 00:30:48.702 objecter_inflight_op_bytes=1048576000 00:30:48.702 filestore_max_sync_interval=10 00:30:48.702 osd_client_message_size_cap = 0 00:30:48.702 osd_client_message_cap = 0 00:30:48.702 osd_enable_op_tracker = false 00:30:48.702 filestore_fd_cache_size = 10240 00:30:48.702 filestore_fd_cache_shards = 64 00:30:48.702 filestore_op_threads = 16 00:30:48.702 osd_op_num_shards = 48 00:30:48.702 osd_op_num_threads_per_shard = 2 00:30:48.702 osd_pg_object_context_cache_count = 10240 00:30:48.702 filestore_odsync_write = True 00:30:48.702 journal_dynamic_throttle = True 00:30:48.702 00:30:48.702 [osd.0] 00:30:48.702 osd data = /var/tmp/ceph/mnt/osd-device-0-data 00:30:48.702 osd journal = /dev/disk/by-partlabel/osd-device-0-journal 00:30:48.702 00:30:48.702 # add mon address 00:30:48.702 [mon.a] 00:30:48.702 mon addr = v2:127.0.0.1:12046 00:30:48.702 + i=0 00:30:48.702 + mkdir -p /var/tmp/ceph/mnt 00:30:48.702 ++ uuidgen 00:30:48.702 + uuid=8ab78d10-2e62-4925-8a7c-5a60bba692a0 00:30:48.702 + ceph -c /var/tmp/ceph/ceph.conf osd create 8ab78d10-2e62-4925-8a7c-5a60bba692a0 0 00:30:48.959 0 00:30:48.959 + ceph-osd -c /var/tmp/ceph/ceph.conf -i 0 --mkfs --mkkey --osd-uuid 8ab78d10-2e62-4925-8a7c-5a60bba692a0 --check-needs-journal --no-mon-config 00:30:48.959 2024-07-23T05:21:49.056+0000 7ff075a6f400 -1 auth: error reading file: /var/tmp/ceph/mnt/osd-device-0-data/keyring: can't open /var/tmp/ceph/mnt/osd-device-0-data/keyring: (2) No such file or directory 00:30:48.959 2024-07-23T05:21:49.057+0000 7ff075a6f400 -1 created new key in keyring /var/tmp/ceph/mnt/osd-device-0-data/keyring 00:30:48.959 2024-07-23T05:21:49.094+0000 7ff075a6f400 -1 journal check: ondisk fsid 
00000000-0000-0000-0000-000000000000 doesn't match expected 8ab78d10-2e62-4925-8a7c-5a60bba692a0, invalid (someone else's?) journal 00:30:48.959 2024-07-23T05:21:49.116+0000 7ff075a6f400 -1 journal do_read_entry(4096): bad header magic 00:30:48.959 2024-07-23T05:21:49.116+0000 7ff075a6f400 -1 journal do_read_entry(4096): bad header magic 00:30:49.220 ++ hostname 00:30:49.220 + ceph -c /var/tmp/ceph/ceph.conf osd crush add osd.0 1.0 host=fedora38-cloud-1716830599-074-updated-1705279005 root=default 00:30:50.598 add item id 0 name 'osd.0' weight 1 at location {host=fedora38-cloud-1716830599-074-updated-1705279005,root=default} to crush map 00:30:50.598 + ceph -c /var/tmp/ceph/ceph.conf -i /var/tmp/ceph/mnt/osd-device-0-data/keyring auth add osd.0 osd 'allow *' mon 'allow profile osd' mgr 'allow *' 00:30:50.598 added key for osd.0 00:30:50.598 ++ ceph -c /var/tmp/ceph/ceph.conf config get osd osd_class_dir 00:30:50.856 + class_dir=/lib64/rados-classes 00:30:50.856 + [[ -e /lib64/rados-classes ]] 00:30:50.856 + ceph -c /var/tmp/ceph/ceph.conf config set osd osd_class_dir /lib64/rados-classes 00:30:51.448 + pkill -9 ceph-osd 00:30:51.448 + true 00:30:51.448 + sleep 2 00:30:53.349 + mkdir -p /var/tmp/ceph/pid 00:30:53.349 + env -i TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 ceph-osd -c /var/tmp/ceph/ceph.conf -i 0 --pid-file=/var/tmp/ceph/pid/ceph-osd.0.pid 00:30:53.349 2024-07-23T05:21:53.433+0000 7f13a5c58400 -1 Falling back to public interface 00:30:53.349 2024-07-23T05:21:53.479+0000 7f13a5c58400 -1 journal do_read_entry(8192): bad header magic 00:30:53.349 2024-07-23T05:21:53.479+0000 7f13a5c58400 -1 journal do_read_entry(8192): bad header magic 00:30:53.349 2024-07-23T05:21:53.489+0000 7f13a5c58400 -1 osd.0 0 log_to_monitors true 00:30:54.282 05:21:54 spdkcli_rbd -- common/autotest_common.sh@1025 -- # ceph osd pool create rbd 128 00:30:55.655 pool 'rbd' created 00:30:55.655 05:21:55 spdkcli_rbd -- common/autotest_common.sh@1026 -- # rbd create foo --size 1000 
00:30:58.939 05:21:58 spdkcli_rbd -- spdkcli/rbd.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py '"/bdevs/rbd create rbd foo 512'\'' '\''Ceph0'\'' True "/bdevs/rbd' create rbd foo 512 Ceph1 'True 00:30:58.939 timing_exit spdkcli_create_rbd_config 00:30:58.939 00:30:58.939 timing_enter spdkcli_check_match 00:30:58.939 check_match 00:30:58.939 timing_exit spdkcli_check_match 00:30:58.939 00:30:58.939 timing_enter spdkcli_clear_rbd_config 00:30:58.939 /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py "/bdevs/rbd' delete Ceph0 Ceph0 '"/bdevs/rbd delete_all'\'' '\''Ceph1'\'' ' 00:30:59.198 Executing command: [' ', True] 00:30:59.198 05:21:59 spdkcli_rbd -- spdkcli/rbd.sh@31 -- # rbd_cleanup 00:30:59.198 05:21:59 spdkcli_rbd -- common/autotest_common.sh@1031 -- # hash ceph 00:30:59.198 05:21:59 spdkcli_rbd -- common/autotest_common.sh@1032 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:30:59.198 + base_dir=/var/tmp/ceph 00:30:59.198 + image=/var/tmp/ceph/ceph_raw.img 00:30:59.198 + dev=/dev/loop200 00:30:59.198 + pkill -9 ceph 00:30:59.198 + sleep 3 00:31:02.488 + umount /dev/loop200p2 00:31:02.488 + losetup -d /dev/loop200 00:31:02.488 + rm -rf /var/tmp/ceph 00:31:02.488 05:22:02 spdkcli_rbd -- common/autotest_common.sh@1033 -- # rm -f /var/tmp/ceph_raw.img 00:31:02.488 05:22:02 spdkcli_rbd -- spdkcli/rbd.sh@32 -- # timing_exit spdkcli_clear_rbd_config 00:31:02.488 05:22:02 spdkcli_rbd -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:02.488 05:22:02 spdkcli_rbd -- common/autotest_common.sh@10 -- # set +x 00:31:02.488 05:22:02 spdkcli_rbd -- spdkcli/rbd.sh@34 -- # killprocess 136612 00:31:02.488 05:22:02 spdkcli_rbd -- common/autotest_common.sh@948 -- # '[' -z 136612 ']' 00:31:02.488 05:22:02 spdkcli_rbd -- common/autotest_common.sh@952 -- # kill -0 136612 00:31:02.488 05:22:02 spdkcli_rbd -- common/autotest_common.sh@953 -- # uname 00:31:02.488 05:22:02 spdkcli_rbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux 
']' 00:31:02.488 05:22:02 spdkcli_rbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 136612 00:31:02.488 killing process with pid 136612 00:31:02.488 05:22:02 spdkcli_rbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:02.488 05:22:02 spdkcli_rbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:02.488 05:22:02 spdkcli_rbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 136612' 00:31:02.488 05:22:02 spdkcli_rbd -- common/autotest_common.sh@967 -- # kill 136612 00:31:02.488 05:22:02 spdkcli_rbd -- common/autotest_common.sh@972 -- # wait 136612 00:31:02.746 05:22:02 spdkcli_rbd -- spdkcli/rbd.sh@1 -- # rbd_cleanup 00:31:02.746 05:22:02 spdkcli_rbd -- common/autotest_common.sh@1031 -- # hash ceph 00:31:02.746 05:22:02 spdkcli_rbd -- common/autotest_common.sh@1032 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:31:02.746 + base_dir=/var/tmp/ceph 00:31:02.746 + image=/var/tmp/ceph/ceph_raw.img 00:31:02.746 + dev=/dev/loop200 00:31:02.746 + pkill -9 ceph 00:31:02.746 + sleep 3 00:31:06.028 + umount /dev/loop200p2 00:31:06.028 umount: /dev/loop200p2: no mount point specified. 
00:31:06.028 + losetup -d /dev/loop200 00:31:06.028 losetup: /dev/loop200: detach failed: No such device or address 00:31:06.028 + rm -rf /var/tmp/ceph 00:31:06.028 05:22:05 spdkcli_rbd -- common/autotest_common.sh@1033 -- # rm -f /var/tmp/ceph_raw.img 00:31:06.028 05:22:05 spdkcli_rbd -- spdkcli/rbd.sh@1 -- # cleanup 00:31:06.028 05:22:05 spdkcli_rbd -- spdkcli/common.sh@10 -- # '[' -n 136612 ']' 00:31:06.028 05:22:05 spdkcli_rbd -- spdkcli/common.sh@11 -- # killprocess 136612 00:31:06.028 05:22:05 spdkcli_rbd -- common/autotest_common.sh@948 -- # '[' -z 136612 ']' 00:31:06.028 05:22:05 spdkcli_rbd -- common/autotest_common.sh@952 -- # kill -0 136612 00:31:06.028 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (136612) - No such process 00:31:06.028 Process with pid 136612 is not found 00:31:06.028 05:22:05 spdkcli_rbd -- common/autotest_common.sh@975 -- # echo 'Process with pid 136612 is not found' 00:31:06.028 05:22:05 spdkcli_rbd -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:31:06.028 05:22:05 spdkcli_rbd -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:31:06.028 05:22:05 spdkcli_rbd -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:31:06.028 05:22:05 spdkcli_rbd -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_rbd.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:31:06.028 ************************************ 00:31:06.028 END TEST spdkcli_rbd 00:31:06.028 ************************************ 00:31:06.028 00:31:06.028 real 0m29.455s 00:31:06.028 user 0m54.673s 00:31:06.028 sys 0m1.355s 00:31:06.028 05:22:05 spdkcli_rbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:06.028 05:22:05 spdkcli_rbd -- common/autotest_common.sh@10 -- # set +x 00:31:06.028 05:22:05 -- common/autotest_common.sh@1142 -- # return 0 00:31:06.028 05:22:05 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:31:06.028 05:22:05 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 
00:31:06.028 05:22:05 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:31:06.028 05:22:05 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:31:06.028 05:22:05 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:31:06.028 05:22:05 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:31:06.028 05:22:05 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:31:06.028 05:22:05 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:31:06.028 05:22:05 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:31:06.028 05:22:05 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:31:06.028 05:22:05 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:31:06.028 05:22:05 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:31:06.028 05:22:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:06.028 05:22:05 -- common/autotest_common.sh@10 -- # set +x 00:31:06.028 05:22:05 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:31:06.028 05:22:05 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:31:06.028 05:22:05 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:31:06.028 05:22:05 -- common/autotest_common.sh@10 -- # set +x 00:31:07.404 INFO: APP EXITING 00:31:07.404 INFO: killing all VMs 00:31:07.404 INFO: killing vhost app 00:31:07.404 INFO: EXIT DONE 00:31:07.664 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:07.664 Waiting for block devices as requested 00:31:07.664 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:07.934 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:08.500 0000:00:10.0 (1b36 0010): Active devices: data@nvme1n1, so not binding PCI dev 00:31:08.501 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:08.501 Cleaning 00:31:08.501 Removing: /var/run/dpdk/spdk0/config 00:31:08.501 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:31:08.760 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:31:08.760 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:31:08.760 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:31:08.760 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:31:08.760 Removing: /var/run/dpdk/spdk0/hugepage_info 00:31:08.760 Removing: /var/run/dpdk/spdk1/config 00:31:08.760 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:31:08.760 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:31:08.760 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:31:08.760 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:31:08.760 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:31:08.760 Removing: /var/run/dpdk/spdk1/hugepage_info 00:31:08.760 Removing: /dev/shm/iscsi_trace.pid88648 00:31:08.760 Removing: /dev/shm/spdk_tgt_trace.pid70746 00:31:08.760 Removing: /var/run/dpdk/spdk0 00:31:08.760 Removing: /var/run/dpdk/spdk1 00:31:08.760 Removing: /var/run/dpdk/spdk_pid133010 00:31:08.760 Removing: /var/run/dpdk/spdk_pid133310 00:31:08.760 Removing: /var/run/dpdk/spdk_pid133350 00:31:08.760 Removing: /var/run/dpdk/spdk_pid133427 00:31:08.760 Removing: /var/run/dpdk/spdk_pid133482 00:31:08.760 Removing: /var/run/dpdk/spdk_pid133542 00:31:08.760 Removing: /var/run/dpdk/spdk_pid133703 00:31:08.760 Removing: /var/run/dpdk/spdk_pid133742 00:31:08.760 Removing: /var/run/dpdk/spdk_pid133767 00:31:08.760 Removing: /var/run/dpdk/spdk_pid133784 00:31:08.760 Removing: /var/run/dpdk/spdk_pid133804 00:31:08.760 Removing: /var/run/dpdk/spdk_pid133877 00:31:08.760 Removing: /var/run/dpdk/spdk_pid133915 00:31:08.760 Removing: /var/run/dpdk/spdk_pid134122 00:31:08.761 Removing: /var/run/dpdk/spdk_pid134414 00:31:08.761 Removing: /var/run/dpdk/spdk_pid134654 00:31:08.761 Removing: /var/run/dpdk/spdk_pid135518 00:31:08.761 Removing: /var/run/dpdk/spdk_pid135563 00:31:08.761 Removing: /var/run/dpdk/spdk_pid135836 00:31:08.761 Removing: /var/run/dpdk/spdk_pid136019 00:31:08.761 Removing: /var/run/dpdk/spdk_pid136183 00:31:08.761 Removing: 
/var/run/dpdk/spdk_pid136322 00:31:08.761 Removing: /var/run/dpdk/spdk_pid136417 00:31:08.761 Removing: /var/run/dpdk/spdk_pid136473 00:31:08.761 Removing: /var/run/dpdk/spdk_pid136496 00:31:08.761 Removing: /var/run/dpdk/spdk_pid136612 00:31:08.761 Removing: /var/run/dpdk/spdk_pid70601 00:31:08.761 Removing: /var/run/dpdk/spdk_pid70746 00:31:08.761 Removing: /var/run/dpdk/spdk_pid70943 00:31:08.761 Removing: /var/run/dpdk/spdk_pid71031 00:31:08.761 Removing: /var/run/dpdk/spdk_pid71053 00:31:08.761 Removing: /var/run/dpdk/spdk_pid71168 00:31:08.761 Removing: /var/run/dpdk/spdk_pid71186 00:31:08.761 Removing: /var/run/dpdk/spdk_pid71304 00:31:08.761 Removing: /var/run/dpdk/spdk_pid71482 00:31:08.761 Removing: /var/run/dpdk/spdk_pid71666 00:31:08.761 Removing: /var/run/dpdk/spdk_pid71737 00:31:08.761 Removing: /var/run/dpdk/spdk_pid71813 00:31:08.761 Removing: /var/run/dpdk/spdk_pid71904 00:31:08.761 Removing: /var/run/dpdk/spdk_pid71972 00:31:08.761 Removing: /var/run/dpdk/spdk_pid72011 00:31:08.761 Removing: /var/run/dpdk/spdk_pid72046 00:31:08.761 Removing: /var/run/dpdk/spdk_pid72102 00:31:08.761 Removing: /var/run/dpdk/spdk_pid72202 00:31:08.761 Removing: /var/run/dpdk/spdk_pid72629 00:31:08.761 Removing: /var/run/dpdk/spdk_pid72681 00:31:08.761 Removing: /var/run/dpdk/spdk_pid72732 00:31:08.761 Removing: /var/run/dpdk/spdk_pid72748 00:31:08.761 Removing: /var/run/dpdk/spdk_pid72815 00:31:08.761 Removing: /var/run/dpdk/spdk_pid72833 00:31:08.761 Removing: /var/run/dpdk/spdk_pid72900 00:31:08.761 Removing: /var/run/dpdk/spdk_pid72916 00:31:08.761 Removing: /var/run/dpdk/spdk_pid72967 00:31:08.761 Removing: /var/run/dpdk/spdk_pid72985 00:31:08.761 Removing: /var/run/dpdk/spdk_pid73025 00:31:08.761 Removing: /var/run/dpdk/spdk_pid73043 00:31:08.761 Removing: /var/run/dpdk/spdk_pid73171 00:31:08.761 Removing: /var/run/dpdk/spdk_pid73203 00:31:08.761 Removing: /var/run/dpdk/spdk_pid73278 00:31:08.761 Removing: /var/run/dpdk/spdk_pid73329 00:31:08.761 Removing: 
/var/run/dpdk/spdk_pid73354 00:31:08.761 Removing: /var/run/dpdk/spdk_pid73412 00:31:08.761 Removing: /var/run/dpdk/spdk_pid73448 00:31:08.761 Removing: /var/run/dpdk/spdk_pid73482 00:31:08.761 Removing: /var/run/dpdk/spdk_pid73517 00:31:08.761 Removing: /var/run/dpdk/spdk_pid73546 00:31:08.761 Removing: /var/run/dpdk/spdk_pid73580 00:31:08.761 Removing: /var/run/dpdk/spdk_pid73615 00:31:08.761 Removing: /var/run/dpdk/spdk_pid73646 00:31:08.761 Removing: /var/run/dpdk/spdk_pid73684 00:31:08.761 Removing: /var/run/dpdk/spdk_pid73713 00:31:08.761 Removing: /var/run/dpdk/spdk_pid73747 00:31:09.020 Removing: /var/run/dpdk/spdk_pid73782 00:31:09.020 Removing: /var/run/dpdk/spdk_pid73811 00:31:09.020 Removing: /var/run/dpdk/spdk_pid73851 00:31:09.020 Removing: /var/run/dpdk/spdk_pid73880 00:31:09.020 Removing: /var/run/dpdk/spdk_pid73920 00:31:09.020 Removing: /var/run/dpdk/spdk_pid73949 00:31:09.020 Removing: /var/run/dpdk/spdk_pid73981 00:31:09.020 Removing: /var/run/dpdk/spdk_pid74024 00:31:09.020 Removing: /var/run/dpdk/spdk_pid74053 00:31:09.020 Removing: /var/run/dpdk/spdk_pid74094 00:31:09.020 Removing: /var/run/dpdk/spdk_pid74158 00:31:09.020 Removing: /var/run/dpdk/spdk_pid74247 00:31:09.020 Removing: /var/run/dpdk/spdk_pid74574 00:31:09.020 Removing: /var/run/dpdk/spdk_pid74593 00:31:09.020 Removing: /var/run/dpdk/spdk_pid74617 00:31:09.020 Removing: /var/run/dpdk/spdk_pid74655 00:31:09.020 Removing: /var/run/dpdk/spdk_pid74666 00:31:09.020 Removing: /var/run/dpdk/spdk_pid74688 00:31:09.020 Removing: /var/run/dpdk/spdk_pid74704 00:31:09.020 Removing: /var/run/dpdk/spdk_pid74714 00:31:09.020 Removing: /var/run/dpdk/spdk_pid74759 00:31:09.020 Removing: /var/run/dpdk/spdk_pid74774 00:31:09.020 Removing: /var/run/dpdk/spdk_pid74829 00:31:09.020 Removing: /var/run/dpdk/spdk_pid74910 00:31:09.020 Removing: /var/run/dpdk/spdk_pid75660 00:31:09.020 Removing: /var/run/dpdk/spdk_pid77365 00:31:09.020 Removing: /var/run/dpdk/spdk_pid77638 00:31:09.020 Removing: 
/var/run/dpdk/spdk_pid77933 00:31:09.020 Removing: /var/run/dpdk/spdk_pid78172 00:31:09.020 Removing: /var/run/dpdk/spdk_pid78785 00:31:09.020 Removing: /var/run/dpdk/spdk_pid83498 00:31:09.020 Removing: /var/run/dpdk/spdk_pid87572 00:31:09.020 Removing: /var/run/dpdk/spdk_pid88317 00:31:09.020 Removing: /var/run/dpdk/spdk_pid88350 00:31:09.020 Removing: /var/run/dpdk/spdk_pid88648 00:31:09.020 Removing: /var/run/dpdk/spdk_pid89929 00:31:09.020 Removing: /var/run/dpdk/spdk_pid90299 00:31:09.020 Removing: /var/run/dpdk/spdk_pid90351 00:31:09.020 Removing: /var/run/dpdk/spdk_pid90723 00:31:09.020 Removing: /var/run/dpdk/spdk_pid93157 00:31:09.020 Clean 00:31:09.020 05:22:09 -- common/autotest_common.sh@1451 -- # return 0 00:31:09.020 05:22:09 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:31:09.020 05:22:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:09.020 05:22:09 -- common/autotest_common.sh@10 -- # set +x 00:31:09.020 05:22:09 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:31:09.020 05:22:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:09.020 05:22:09 -- common/autotest_common.sh@10 -- # set +x 00:31:09.020 05:22:09 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:09.278 05:22:09 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:31:09.278 05:22:09 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:31:09.278 05:22:09 -- spdk/autotest.sh@391 -- # hash lcov 00:31:09.278 05:22:09 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:31:09.278 05:22:09 -- spdk/autotest.sh@393 -- # hostname 00:31:09.278 05:22:09 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t 
fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:31:09.278 geninfo: WARNING: invalid characters removed from testname! 00:31:41.370 05:22:37 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:41.629 05:22:41 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:44.203 05:22:44 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:47.487 05:22:46 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:49.390 05:22:49 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info 
'*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:51.918 05:22:52 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:55.202 05:22:54 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:31:55.202 05:22:54 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:55.202 05:22:54 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:31:55.202 05:22:54 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:55.202 05:22:54 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:55.202 05:22:54 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.202 05:22:54 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.202 05:22:54 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.202 05:22:54 -- paths/export.sh@5 -- $ export PATH 00:31:55.203 05:22:54 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.203 05:22:54 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:31:55.203 05:22:54 -- common/autobuild_common.sh@447 -- $ date +%s 00:31:55.203 05:22:54 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721712174.XXXXXX 00:31:55.203 05:22:54 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721712174.2nOjKI 00:31:55.203 05:22:54 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:31:55.203 05:22:54 -- common/autobuild_common.sh@453 -- $ '[' -n v22.11.4 ']' 00:31:55.203 05:22:54 -- common/autobuild_common.sh@454 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:31:55.203 05:22:54 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:31:55.203 05:22:54 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:31:55.203 05:22:54 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude 
/home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:31:55.203 05:22:54 -- common/autobuild_common.sh@463 -- $ get_config_params 00:31:55.203 05:22:54 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:31:55.203 05:22:54 -- common/autotest_common.sh@10 -- $ set +x 00:31:55.203 05:22:54 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-rbd --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:31:55.203 05:22:54 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:31:55.203 05:22:54 -- pm/common@17 -- $ local monitor 00:31:55.203 05:22:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:55.203 05:22:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:55.203 05:22:54 -- pm/common@25 -- $ sleep 1 00:31:55.203 05:22:54 -- pm/common@21 -- $ date +%s 00:31:55.203 05:22:54 -- pm/common@21 -- $ date +%s 00:31:55.203 05:22:54 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721712174 00:31:55.203 05:22:54 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721712174 00:31:55.203 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721712174_collect-vmstat.pm.log 00:31:55.203 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721712174_collect-cpu-load.pm.log 00:31:55.770 05:22:55 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:31:55.770 05:22:55 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:31:55.770 05:22:55 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:31:55.770 05:22:55 -- spdk/autopackage.sh@13 -- $ [[ 
0 -eq 1 ]] 00:31:55.770 05:22:55 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:31:55.770 05:22:55 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:31:55.770 05:22:55 -- spdk/autopackage.sh@19 -- $ timing_finish 00:31:55.770 05:22:55 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:31:55.770 05:22:55 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:31:55.770 05:22:55 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:56.029 05:22:56 -- spdk/autopackage.sh@20 -- $ exit 0 00:31:56.029 05:22:56 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:31:56.029 05:22:56 -- pm/common@29 -- $ signal_monitor_resources TERM 00:31:56.029 05:22:56 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:31:56.029 05:22:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:56.029 05:22:56 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:31:56.029 05:22:56 -- pm/common@44 -- $ pid=139108 00:31:56.029 05:22:56 -- pm/common@50 -- $ kill -TERM 139108 00:31:56.029 05:22:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:56.029 05:22:56 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:31:56.029 05:22:56 -- pm/common@44 -- $ pid=139110 00:31:56.029 05:22:56 -- pm/common@50 -- $ kill -TERM 139110 00:31:56.029 + [[ -n 5852 ]] 00:31:56.029 + sudo kill 5852 00:31:56.037 [Pipeline] } 00:31:56.053 [Pipeline] // timeout 00:31:56.058 [Pipeline] } 00:31:56.073 [Pipeline] // stage 00:31:56.078 [Pipeline] } 00:31:56.092 [Pipeline] // catchError 00:31:56.098 [Pipeline] stage 00:31:56.100 [Pipeline] { (Stop VM) 00:31:56.107 [Pipeline] sh 00:31:56.441 + vagrant halt 00:31:59.727 ==> default: Halting domain... 
00:32:06.307 [Pipeline] sh 00:32:06.586 + vagrant destroy -f 00:32:10.794 ==> default: Removing domain... 00:32:10.807 [Pipeline] sh 00:32:11.085 + mv output /var/jenkins/workspace/iscsi-vg-autotest/output 00:32:11.095 [Pipeline] } 00:32:11.115 [Pipeline] // stage 00:32:11.122 [Pipeline] } 00:32:11.143 [Pipeline] // dir 00:32:11.149 [Pipeline] } 00:32:11.168 [Pipeline] // wrap 00:32:11.175 [Pipeline] } 00:32:11.195 [Pipeline] // catchError 00:32:11.207 [Pipeline] stage 00:32:11.210 [Pipeline] { (Epilogue) 00:32:11.226 [Pipeline] sh 00:32:11.535 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:32:18.104 [Pipeline] catchError 00:32:18.106 [Pipeline] { 00:32:18.119 [Pipeline] sh 00:32:18.395 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:32:18.396 Artifacts sizes are good 00:32:18.405 [Pipeline] } 00:32:18.423 [Pipeline] // catchError 00:32:18.435 [Pipeline] archiveArtifacts 00:32:18.443 Archiving artifacts 00:32:19.515 [Pipeline] cleanWs 00:32:19.530 [WS-CLEANUP] Deleting project workspace... 00:32:19.530 [WS-CLEANUP] Deferred wipeout is used... 00:32:19.536 [WS-CLEANUP] done 00:32:19.538 [Pipeline] } 00:32:19.555 [Pipeline] // stage 00:32:19.561 [Pipeline] } 00:32:19.579 [Pipeline] // node 00:32:19.584 [Pipeline] End of Pipeline 00:32:19.641 Finished: SUCCESS